Search Results: "ender"

6 January 2024

Valhalla's Things: Blog updates

Posted on January 6, 2024
Tags: madeof:bits, meta
After just a tiny[1] delay I've finally added support for tags to this blog. In the next few days I may go back and add / change tags to the older posts, or I may not; I'll decide. Also, I still need to render a tag cloud somewhere; maybe it will happen soon, maybe it will take another year. :D I hope I've also successfully worked around the bug in Friendica where the src for images got deleted instead of properly converted. The logo of this blog, just to see whether a random image is shown on Friendica and the Fediverse.

  1. less than one year!

1 January 2024

Russ Allbery: 2023 Book Reading in Review

In 2023, I finished and reviewed 53 books, continuing a trend of year-over-year increases and of reading the most books since 2012 (the last year I averaged five books a month). Reviewing continued to be uneven, with a significant slump in the summer and smaller slumps in February and November, and a big clump of reviews finished in October in addition to my normal year-end reading and reviewing vacation. The unevenness this year was mostly due to finishing books and not writing reviews immediately. Reviews are much harder to write when the finished books are piling up, so one goal for 2024 is to not let that happen again. I enter the new year with one book finished and not yet reviewed, after reading a book about every day and a half during my December vacation. I read two all-time favorite books this year. The first was Emily Tesh's debut novel Some Desperate Glory, which is one of the best space opera novels I have ever read. I cannot improve on Shelley Parker-Chan's blurb for this book: "Fierce and heartbreakingly humane, this book is for everyone who loved Ender's Game, but Ender's Game didn't love them back." This is not hard science fiction but it is fantastic character fiction. It was exactly what I needed in the middle of a year in which I was fighting a "burn everything down" mood. The second was Night Watch by Terry Pratchett, the 29th Discworld and 6th Watch novel. Throughout my Discworld read-through, Pratchett felt like he was on the cusp of a truly stand-out novel, one where all the pieces fit and the book becomes something more than the sum of its parts. This was that book. It's a book about ethics and revolutions and governance, but also about how your perception of yourself changes as you get older. It does all of the normal Pratchett things, just... better. While I would love to point new Discworld readers at it, I think you do have to read at least the Watch novels that came before it for it to carry its proper emotional heft. This was overall a solid year for fiction reading. I read another 15 novels I rated 8 out of 10, and 12 that I rated 7 out of 10. The largest contributor to that was my Discworld read-through, which was reliably entertaining throughout the year. The run of Discworld books between The Fifth Elephant (read late last year) and Wintersmith (my last of this year) was the best run of Discworld novels so far. One additional book I'll call out as particularly worth reading is Thud!, the Watch novel after Night Watch and another excellent entry. I read two stand-out non-fiction books this year. The first was Oliver Darkshire's delightful memoir about life as a rare book seller, Once Upon a Tome. One of the things I will miss about Twitter is the regularity with which I stumbled across fascinating people and then got to read their books. I'm off Twitter permanently now because the platform is designed to make me incoherently angry and I need less of that in my life, but it was very good at finding delightfully quirky books like this one. My other favorite non-fiction book of the year was Michael Lewis's Going Infinite, a profile of Sam Bankman-Fried. I'm still bemused at the negative reviews that this got from people who were upset that Lewis didn't turn the story into a black-and-white morality play. Bankman-Fried's actions were clearly criminal; that's not in dispute. Human motivations can be complex in ways that are irrelevant to the law, and I thought this attempt to understand that complexity by a top-notch storyteller was worthy of attention. 
Also worth a mention is Tony Judt's Postwar, the first book I reviewed in 2023. A sprawling history of post-World-War-II Europe will never have the sheer readability of shorter, punchier books, but this was the most informative book that I read in 2023. 2024 should see the conclusion of my Discworld read-through, after which I may return to re-reading Mercedes Lackey or David Eddings, both of which I paused to make time for Terry Pratchett. I also have another re-read similar to my Chronicles of Narnia reviews that I've been thinking about for a while. Perhaps I will start that next year; perhaps it will wait for 2025. Apart from that, my intention as always is to read steadily, write reviews as close to when I finished the book as possible, and make reading time for my huge existing backlog despite the constant allure of new releases. Here's to a new year full of more new-to-me books and occasional old favorites. The full analysis includes some additional personal reading statistics, probably only of interest to me.

29 December 2023

Russ Allbery: Review: The Afterward

Review: The Afterward, by E.K. Johnston
Publisher: Dutton Books
Copyright: February 2019
Printing: 2020
ISBN: 0-7352-3190-7
Format: Kindle
Pages: 339
The Afterward is a standalone young adult high fantasy with a substantial romance component. The title is not misspelled. Sir Erris and her six companions, matching the number of the new gods, were successful in their quest for the godsgem. They defeated the Old God and destroyed Him forever, freeing King Dorrenta from his ensorcellment, and returned in triumph to Cadrium to live happily ever after. Or so the story goes. Sir Erris and three of the companions are knights. Another companion is the best mage in the kingdom. Kalanthe Ironheart, who distracted the Old God at a critical moment and allowed Sir Erris to strike, is only an apprentice due to her age, but surely will become a great knight. And then there is Olsa Rhetsdaughter, the lowborn thief, now somewhat mockingly called Thief of the Realm for all the good that does her. The reward was enough for her to buy her freedom from the Thief's Court. It was not enough to pay for food after that, or enough for her to change her profession, and the Thief's Court no longer has any incentive to give her easy (or survivable) assignments. Kalanthe is in a considerably better position, but she still needs a good marriage. Her reward paid off half of her debt, which broadens her options, but she's still a debt-knight, liable for the full cost of her training once she reaches the age of nineteen. She's mostly made her peace with the decisions she made given her family's modest means, but marriages of that type are usually for heirs, and Kalanthe is not looking forward to bearing a child. Or, for that matter, sleeping with a man. Olsa and Kalanthe fell in love during the Quest. Given Kalanthe's debt and the way it must be paid, and her iron-willed determination to keep vows, neither of them expected their relationship to survive the end of the Quest. Both of them wish that it had. The hook is that this novel picks up after the epic fantasy quest is over and everyone went home. This is not an entirely correct synopsis; chapters of The Afterward alternate between "After" and "Before" (and one chapter delightfully titled "More or less the exact moment of"), and by the end of the book we get much of the story of the Quest. It's not told from the perspective of the lead heroes, though; it's told by following Kalanthe and Olsa, who would be firmly relegated to supporting characters in a typical high fantasy. And it's largely told through the lens of their romance. This is not the best fantasy novel I've read, but I had a fun time with it. I am now curious about the intended audience and marketing, though. It was published by a YA imprint, and both the ages of the main characters and the general theme of late teenagers trying to chart a course in an adult world match that niche. But it's also clearly intended for readers who have read enough epic fantasy quests that they will both be amused by the homage and not care that the story elides a lot of the typical details. Anyone who read David Eddings at an impressionable age will enjoy the way Johnston pokes gentle fun at The Belgariad (this book is dedicated to David and Leigh Eddings), but surely the typical reader of YA fantasy these days isn't also reading Eddings. I'm therefore not quite sure who this book was for, but apparently that group included me. Johnston thankfully is not on board with the less savory parts of Eddings's writing, as you might have guessed from the sapphic romance. 
There is no obnoxious gender essentialism here, although there do appear to be gender roles that I never quite figured out. Knights are referred to as sir, but all of the knights in this story are women. Men still seem to run a lot of things (kingdoms, estates, mage colleges), but apart from the mage, everyone on the Quest was female, and there seems to be an expectation that women go out into the world and have adventures while men stay home. I'm not sure if there was an underlying system that escaped me, or if Johnston just mixed things up for the hell of it. (If the latter, I approve.) This book does suffer a bit from addressing some current-day representation issues without managing to fold them naturally into the story or setting. One of the Quest knights is transgender, something that's revealed in an awkward couple of paragraphs and then never mentioned again. Two of the characters have a painfully earnest conversation about the word "bisexual," complete with a strained attempt at in-universe etymology. Racial diversity (Olsa is black, and Kalanthe is also not white) seemed to be handled a bit better, although I am not the reader to notice if the discussions of hair maintenance were similarly awkward. This is way better than no representation and default-white characters, to be clear, but it felt a bit shoehorned in at times and could have used some more polish. These are quibbles, though. Olsa was the heart of the book for me, and is exactly the sort of character I like to read about. Kalanthe is pure stubborn paladin, but I liked her more and more as the story continued. She provides a good counterbalance to Olsa's natural chaos. I do wish Olsa had more opportunities to show her own competence (she's not a very good thief, she's just the thief that Sir Erris happened to know), but the climax of the story was satisfying. My main grumble is that I badly wanted to dwell on the happily-ever-after for at least another chapter, ideally two. Johnston was done with the story before I was. The writing was serviceable but not great and there are some bits that I don't think would stand up to a strong poke, but the characters carried the story for me. Recommended if you'd like some sapphic romance and lightweight class analysis complicating your Eddings-style quest fantasy. Rating: 7 out of 10

24 December 2023

Russ Allbery: Review: Liberty's Daughter

Review: Liberty's Daughter, by Naomi Kritzer
Publisher: Fairwood Press
Copyright: November 2023
ISBN: 1-958880-16-7
Format: Kindle
Pages: 257
Liberty's Daughter is a stand-alone near-future science fiction fix-up novel. The original stories were published in Fantasy and Science Fiction between 2012 and 2015. Beck Garrison lives on New Minerva (Min), one of a cluster of libertarian seasteads 220 nautical miles off the coast of Los Angeles. Her father brought her to Min when she was four, so it's the only life she knows. As this story opens, she's picked up a job for pocket change: finding very specific items that people want to buy. Since any new goods have to be shipped in and the seasteads have an ambiguous legal status, they don't get Amazon deliveries, but there are enough people (and enough tourists who bring high-value goods for trade) that someone probably has whatever someone else is looking for. Even sparkly high-heeled sandals size eight. Beck's father is high in the informal power structure of the seasteads for reasons that don't become apparent until very late in this book. Beck therefore has a comfortable, albeit cramped, life. The social protections, self-confidence, and feelings of invincibility that come with that wealth serve her well as a finder. After the current owner of the sandals bargains with her to find a person rather than an object, that privilege also lets her learn quite a lot before she starts getting into trouble. The political background of this novel is going to require some suspension of disbelief. The premise is that one of those harebrained libertarian schemes to form a freedom utopia has been successful enough to last for 49 years and attract 80,000 permanent residents. (It's a libertarian seastead so a lot of those residents are indentured slaves, as one does in libertarian philosophy. The number of people with shares, like Beck's father, is considerably smaller.) By the end of the book, Kritzer has offered some explanations for why the US would allow such a place to continue to exist, but the chances of the famously fractious con artists and incompetents involved in these types of endeavors creating something that survived internal power struggles for that long seem low. One has to roll with it for story reasons: Kritzer needs the population to be large enough for a plot, and the history to be long enough for Beck to exist as a character. The strength of this book is Beck, and specifically the fact that Beck is a second-generation teenager who grew up on the seastead. Unlike a lot of her age peers with their Cayman Islands vacations, she's never left and has no experience with life on land. She considers many things to be perfectly normal that are not at all normal to the reader and the various reader surrogates who show up over the course of the book. She also has the instinctive feel for seastead politics of the child of a prominent figure in a small town. And, most importantly, she has formed her own sense of morality and social structure that matches neither that of the reader nor that of her father. Liberty's Daughter is told in first-person by Beck. Judging the authenticity of Gen-Z thought processes is not one of my strengths, but Beck felt right to me. Her narration is dryly matter-of-fact, with only brief descriptions of her emotional reactions, but her personality shines in the occasional sarcasm and obstinacy. Kritzer has the teenage bafflement at the stupidity of adults down pat, as well as the tendency to jump head-first into ideas and make some decisions through sheer stubbornness. 
This is not one of those fix-up novels where the author has reworked the stories sufficiently that the original seams don't show. It is very episodic; compared to a typical novel of this length, there's more plot but less character growth. It's a good book when you want to be pulled into a stream of events that moves right along. This is not the book for deep philosophical examinations of the basis of a moral society, but what it does have, around the edges, is the way humans build human societies and develop elaborate social conventions and senses of belonging no matter how stupid the original philosophical foundations were. Even societies built on nasty exploitation can engender a sort of loyalty. Beck doesn't support the worst parts of her weird society, but she wants to fix it, not burn it to the ground. I thought there was a profound observation there. That brings me to my complaint: I hated the ending. Liberty's Daughter is in part Beck's fight for her own autonomy, both moral and financial, and the beginnings of an effort to turn her home into the sort of home she wants. By the end of the book, she's testing the limits of what she can accomplish, solidifying her own moral compass, and deciding how she wants to use the social position she inherited. It felt like the ending undermined all of that and treated her like a child. I know adolescence comes with those sorts of reversals, but I was still so mad. This is particularly annoying since I otherwise want to recommend this book. It's not ground-breaking, it's not that deep, but it was a thoroughly enjoyable day's worth of entertainment with a likable protagonist. Just don't read the last chapter, I guess? Or have more tolerance than I have for people treating sixteen-year-olds as if they're not old enough to make decisions. Content warnings: pandemic. Rating: 7 out of 10

22 December 2023

Gunnar Wolf: Pushing some reviews this way

Over roughly the last year and a half I have been participating as a reviewer in ACM's Computing Reviews, and have even been honored as a Featured Reviewer. Given I have long enjoyed reading friends' reviews of their reading material (particularly, hats off to the very active Russ Allbery, who both beats all of my frequency expectations (I could never sustain the rhythm he reads at!) and holds documented records for his >20 years as a book reader, with far more clarity and readability than I can aim for!), I decided to explicitly share my reviews via this blog, as the audience is somewhat congruent; I will also link here some reviews that were not approved for publication, clearly marking them so. I will probably work on wrangling my Jekyll site to display an (auto-)updated page and RSS feed for the reviews. In the meantime, the reviews I have published are:

20 December 2023

Ulrike Uhlig: How volunteer work in F/LOSS exacerbates pre-existing lines of oppression, and what that has to do with low diversity

This is a post I wrote in June 2022, but did not publish back then. After first publishing it in December 2023, a perfectionist, insecure part of me unpublished it again. After receiving positive feedback, I slightly amended and republished it now. In this post, I talk about unpaid work in F/LOSS, taking the example of hackathons, and why, in my opinion, the expectation of volunteer work is hurting diversity. Disclaimer: I don't have all the answers, only some ideas and questions.

Previous findings In 2006, the Flosspols survey sought to explain the role of gender in free/libre/open source software (F/LOSS) communities, because an earlier [study] had revealed a significant discrepancy in the proportion of men to women. It showed that just about 1.5% of F/LOSS community members were female at that time, compared with 28% in proprietary software (which is also a low number). Their key findings were, to name just a few:
  • that F/LOSS rewards producing code rather than producing software. It thereby puts most emphasis on a particular skill set. Other activities such as interface design or documentation are understood as less technical and therefore less prestigious.
  • The reliance on long hours of intensive computing in writing successful code means that men, who in general assume that time outside of waged labour is "theirs", are freer to participate than women, who normally still assume a disproportionate amount of domestic responsibilities. Female F/LOSS participants, however, seem to be able to allocate a disproportionately larger share of their leisure time to their F/LOSS activities. This gives an indication that women who are not able to spend as much time on voluntary activities have difficulties integrating into the community.
We also know from the 2016 Debian survey, published in 2021, that a majority of Debian contributors are employed, rather than being contractors, and rather than being students. Also, 95.5% of respondents to that study were men between the ages of 30 and 49, highly educated, with the largest groups coming from Germany, France, USA, and the UK. The study found that only 20% of the respondents were being paid to work on Debian. Half of these 20% estimate that the amount of work on Debian they are being paid for corresponds to less than 20% of the work they do there. On the other side, there are 14% of those who are being paid for Debian work who declared that 80-100% of the work they do in Debian is remunerated.

So, if a majority of people are not paid, why do they work on F/LOSS? Or: What are the incentives of free software? In 2021, Louis-Philippe Véronneau aka Pollo, who is not only a Debian Developer but also an economist, published his thesis What are the incentive structures of free software (the actual thesis was written in French). One very interesting finding Pollo pointed out is this one:
Indeed, while we have proven that there is a strong and significant correlation between income and participation in a free/libre software project, it is not possible for us to pronounce ourselves about the causality of this link.
In the French original text:
En effet, si nous avons prouvé qu'il existe une corrélation forte et significative entre le salaire et la participation à un projet libre, il ne nous est pas possible de nous prononcer sur la causalité de ce lien.
Said differently, it is certain that there is a relationship between income and F/LOSS contribution, but it's unclear whether working on free/libre software ultimately helps in finding a well-paid job, or if having a well-paid job is what enables work on free/libre software. I would like to scratch at this question a bit further, mostly relying on my own observations, experiences, and discussions with F/LOSS contributors.

Volunteer work is unpaid work We often hear of hackathons, hack weeks, or hackfests. I've been at some such events myself: Tails organized one, the IETF regularly organizes hackathons, and last week (June 2022!) I saw an invitation for a hack week with the Tor Project. This type of event generally lasts several days. While the people who organize these events are being paid by the organizations they work for, participants on the other hand are generally joining on a volunteer basis. Who can we expect to show up at this type of event under these circumstances as participants? To answer this question, I collected some ideas:
  • people who have an employer sponsoring their work
  • people who have a funder/grant sponsoring their work
  • people who have a high income and can take time off easily (in that regard, remember the gender pay gap: women often earn less than men for the same work)
  • people who rely on family wealth (living off an inheritance, living on rights payments from a famous grandparent - I'm not making these situations up, there are actual people in such financially favorable situations)
  • people who don't need much money because they don't have to pay rent or pay low rent (besides house owners, that category includes people who live in squats or have social welfare paying for their rent, people who live with parents or caretakers)
  • people who don't need to do care work (for children, elderly family members, pets. Remember that most care work is still done by women.)
  • students who have financial support or are in a situation in which they do not yet need to generate a lot of income
  • people who otherwise have free time at their disposal
So, who, in your opinion, fits these unwritten requirements? Looking at this list, it's pretty clear to me why we'd mostly find white men from the Global North, generally with higher education, in hackathons and F/LOSS development. ("Great, they're a culture fit!") Yes, there will also always be some people of marginalized groups who will attend such events because they expect to network, to find an internship, to find a better job in the future, or to add their participation to their curriculum. To me, this rings a bunch of alarm bells.

Low diversity in F/LOSS projects: a mirror of the distribution of wealth I believe that the lack of diversity in F/LOSS is first of all a mirror of the distribution of wealth on a larger level. And by wealth I'm referring to financial wealth as much as to social wealth in the sense of Bourdieu: families of highly educated parents socially reproducing privilege by allowing their kids to attend better schools, supporting and guiding them in their choices of study and work, providing them with relations to internships acting as springboards into well-paid jobs, and so on. That said, we should ask ourselves as well:

Do F/LOSS projects exacerbate existing lines of oppression by relying on unpaid work? Let's look again at the causality question of Pollo's research (in my words):
It is unclear whether working on free/libre software ultimately helps in finding a well-paid job, or if having a well-paid job is what enables work on free/libre software.
Maybe we need to imagine this cause-effect relationship over time: as a student, without children and with lots of free time, hopefully some money from the state or the family, people can spend time on F/LOSS, collect experience, earn recognition - and later find a well-paid job and make unpaid F/LOSS contributions into a hobby, cementing their status in the community, while at the same time generating a sense of well-being from working on the common good. This is a quite common scenario. As the Flosspols study revealed, however, boys often get their own computer at the age of 14, while girls get one only at the age of 20. (These numbers might be slightly different now, and possibly many people don't own an actual laptop or desktop computer anymore; instead they own mobile devices, which are not exactly inviting them to look behind the surface, take things apart, learn, and appropriate technology.) In any case, the above scenario does not allow for people who join F/LOSS later in life, e.g. when changing careers, to find their place. I believe that F/LOSS projects cannot expect to have more women, people of color, people from working class backgrounds, people from outside of Germany, France, USA, UK, Australia, and Canada on board as long as volunteer work is the status quo and waged labour an earned privilege.

Wait, are you criticizing all these wonderful people who sacrifice their free time to work towards the common good? No, that's definitely not my intention. I'm glad that F/LOSS exists, and the F/LOSS ecosystem has always represented a small utopia to me that is worth cherishing and nurturing. However, I think we still need to talk more about the lack of diversity, and investigate it further.

Some types of work are never being paid Besides free work at hacking events, let me also underline that a lot of work in F/LOSS is not considered payable work (yes, that's an oxymoron!). Which F/LOSS project, for example, has ever paid translators a decent fee? Which project has ever considered that doing the social glue work, often done by women in the projects, is work that should be paid for? Which F/LOSS projects pay the people who do their Debian packaging rather than relying on yet another already well-paid white man who can afford doing this work for free, all the while holding up how great the F/LOSS ecosystem is? And how many people on opensourcedesign jobs are looking to get their logo or website done for free? (Isn't that heart icon appealing to your altruistic empathy?) In my experience, even F/LOSS projects which are trying to do the right thing by paying everyone the same amount of money per hour run into issues when it turns out that not all hours are equal and that some types of work do not qualify for remuneration at all, or that the rules for the clocking of work are not universally applied in the same way by everyone.

Not every interaction should have a monetary value, but... Some of you want to keep working without being paid, because that feels a bit like communism within capitalism; it makes you feel good to contribute to the greater good while not having the system determine your value over money. I hear you. I've been there (and sometimes still am). But as long as we live in this system, even though we didn't choose to and maybe even despise it - communism is not about working for free, it's about getting paid equally and adequately. We may not think about it while under the age of 40 or 45, but working without adequate financial compensation, even half of the time, will ultimately result in not being able to care for oneself when sick, when old. And while this may not be an issue for people who inherit wealth, or have an otherwise safe economical background, e.g. an academic salary, it is a huge problem and barrier for many people coming out of the working or service classes. (Oh and please, don't repeat the neoliberal lie that everyone can achieve whatever they aim for, if they just try hard enough. French research shows that (in France) one has only a 30% chance to become a "class defector" and change social class upwards. "But I managed to get out and move up, so everyone can!" - well, if you believe that, I'm afraid you might be experiencing survivor bias.)

Not all bodies are equally able We should also be aware that not all of us can work with the same amount of energy either. There is yet another category of people who are excluded by the expectation of volunteer work, either because the waged labour they do already eats all of their energy, or because their bodies are not disposed to do that much work, for example because of mental health issues - such as depression - or because of physical disabilities.

When organizing events relying on volunteer work, please think about these things. Yes, you can tell people that they should ask their employer to pay them for attending a hackathon - but, as I've hopefully shown, that would not work for many people, especially newcomers. Instead, you could propose a fund to make it possible for people who would not normally attend to do so. DebConf is a good example of having done this for many years.

Conclusively I would like to urge free software projects that have a budget and directly pay some people from it to map where they rely on volunteer work and how this hurts diversity in their project. How do you or your project exacerbate pre-existing lines of oppression by granting or not granting monetary value to certain types of work? What is it that you take for granted? As always, I'm curious about your feedback!

Worth a read These ideas are far from being new. Ashe Dryden's well-researched post The ethics of unpaid labor and the OSS community dates back to 2013 and is as important as it was ten years ago.

Melissa Wen: The Rainbow Treasure Map Talk: Advanced color management on Linux with AMD/Steam Deck.

Last week marked a major milestone for me: the AMD driver-specific color management properties reached upstream linux-next! And to celebrate, I'm happy to share the slide notes from my 2023 XDC talk, The Rainbow Treasure Map, along with the individual recording that just dropped last week on YouTube - talk about happy coincidences!

Steam Deck Rainbow: Treasure Map & Magic Frogs While I may be bubbly and chatty in everyday life, the stage isn't exactly my comfort zone (hallway talks are more my speed). But the journey of developing the AMD color management properties was so full of discoveries that I simply had to share the experience. Witnessing the fantastic work of Jeremy and Joshua bring it all to life on the Steam Deck OLED was like uncovering magical ingredients and whipping up something truly enchanting. For XDC 2023, we split our Rainbow journey into two talks. My focus, The Rainbow Treasure Map, explored the new color features we added to the Linux kernel driver, diving deep into the hardware capabilities of AMD/Steam Deck. Joshua then followed with The Rainbow Frogs and showed the breathtaking color magic released on Gamescope thanks to the power unlocked by the kernel driver's Steam Deck color properties.

Packing a Rainbow into 15 Minutes I had so much to tell, but a half-slot talk meant crafting a concise presentation. To squeeze everything into 15 minutes (and calm my pre-talk jitters a bit!), I drafted and practiced those slides and notes countless times. So grab your map, and let's embark on the Rainbow journey together!
Slide 1: The Rainbow Treasure Map - Advanced Color Management on Linux with AMD/SteamDeck
Intro: Hi, I'm Melissa from Igalia and welcome to the Rainbow Treasure Map, a talk about advanced color management on Linux with AMD/SteamDeck.
Slide 2: List useful links for this technical talk
Useful links: First of all, if you are not used to the topic, you may find these links useful.
  1. XDC 2022 - I m not an AMD expert, but - Melissa Wen
  2. XDC 2022 - Is HDR Harder? - Harry Wentland
  3. XDC 2022 Lightning - HDR Workshop Summary - Harry Wentland
  4. Color management and HDR documentation for FOSS graphics - Pekka Paalanen et al.
  5. Cinematic Color - 2012 SIGGRAPH course notes - Jeremy Selan
  6. AMD Driver-specific Properties for Color Management on Linux (Part 1) - Melissa Wen
Slide 3: Why do we need advanced color management on Linux?
Context: When we talk about colors in the graphics chain, we should keep in mind that we have a wide variety of source content colorimetry, a variety of output display devices, and also the internal processing. Users expect consistent color reproduction across all these devices. The userspace can use GPU-accelerated color management to get it. But this also requires an interface with display kernel drivers that is currently missing from the DRM/KMS framework.
Slide 4: Describe our work on AMD driver-specific color properties
Since April, I've been bothering the DRM community by sending patchsets from the work of me and Joshua to add driver-specific color properties to the AMD display driver. In parallel, discussions on defining a generic color management interface are still ongoing in the community. Moreover, we are still not clear about the diversity of color capabilities among hardware vendors. To bridge this gap, we defined a color pipeline for Gamescope that fits the latest versions of AMD hardware. It delivers advanced color management features for gamut mapping, HDR rendering, SDR on HDR, and HDR on SDR.
Slide 5: Describe the AMD/SteamDeck - our hardware
AMD/Steam Deck hardware: AMD frequently releases new GPU and APU generations. Each generation comes with a DCN version with display hardware improvements. Therefore, keep in mind that this work uses the AMD Steam Deck hardware and its kernel driver. The Steam Deck is an APU with a DCN3.01 display driver, from the DCN3 family. It's important to have this information since newer AMD DCN drivers inherit implementations from previous families, but also each generation of AMD hardware may introduce new color capabilities. Therefore, I recommend familiarizing yourself with the hardware you are working on.
Slide 6: Diagram with the three layers of the AMD display driver on Linux
The AMD display driver in the kernel space: It consists of three layers, (1) the DRM/KMS framework, (2) the AMD Display Manager, and (3) the AMD Display Core. We extended the color interface exposed to userspace by leveraging existing DRM resources and connecting them using driver-specific functions for color property management.
Slide 7: Three-layers diagram highlighting AMD Display Manager, DM - the layer that connects DC and DRM
Bridging DC color capabilities and the DRM API required significant changes in the color management of the AMD Display Manager - the Linux-dependent part that connects the AMD DC interface to the DRM/KMS framework.
Slide 8: Three-layers diagram highlighting AMD Display Core, DC - the shared code
The AMD DC is the OS-agnostic layer. Its code is shared between platforms and DCN versions. Examining this part helps us understand the AMD color pipeline and hardware capabilities, since the machinery for hardware settings and resource management is already there.
Slide 9: Diagram of the AMD Display Core Next architecture with main elements and data flow
The newest architecture for AMD display hardware is the AMD Display Core Next.
Slide 10: Diagram of the AMD Display Core Next where only DPP and MPC blocks are highlighted
In this architecture, two blocks have the capability to manage colors:
  • Display Pipe and Plane (DPP) - for pre-blending adjustments;
  • Multiple Pipe/Plane Combined (MPC) - for post-blending color transformations.
Let's see what we have in the DRM API for pre-blending color management.
Slide 11: Blank slide with no content, only a title 'Pre-blending: DRM plane'
DRM plane color properties: This is the DRM color management API before blending. Nothing! Except two basic DRM plane properties: color_encoding and color_range for the input colorspace conversion, which is not covered by this work.
Slide 12: Diagram with color capabilities and structures in AMD DC layer without any DRM plane color interface (before blending), only the DRM CRTC color interface for post blending
In case you're not familiar with AMD shared code, what we need to do is basically draw a map and navigate there! We have some DRM color properties after blending, but nothing before blending yet. But much of the hardware programming was already implemented in the AMD DC layer, thanks to the shared code.
Slide 13: Previous diagram with a rectangle to highlight the empty space in the DRM plane interface that will be filled by AMD plane properties
Still, both the DRM interface and its connection to the shared code were missing. That's when the search begins!
Slide 14: Color Pipeline Diagram with the plane color interface filled by AMD plane properties but without connections to AMD DC resources
AMD driver-specific color pipeline: Looking at the color capabilities of the hardware, we arrive at this initial set of properties. The path wasn't exactly like that. We had many iterations and discoveries until we reached this pipeline.
Slide 15: Color Pipeline Diagram connecting AMD plane degamma properties, LUT and TF, to AMD DC resources
The Plane Degamma is our first driver-specific property before blending. It's used to linearize the color space from encoded values to light-linear values.
Slide 16: Describe plane degamma properties and hardware capabilities
We can use a pre-defined transfer function or a user lookup table (in short, LUT) to linearize the color space. Pre-defined transfer functions for plane degamma are hardcoded curves that go to a specific hardware block called DPP Degamma ROM. It supports the following transfer functions: sRGB EOTF, BT.709 inverse OETF, PQ EOTF, and pure power curves Gamma 2.2, Gamma 2.4 and Gamma 2.6. We also have a one-dimensional LUT. This 1D LUT has four thousand ninety-six (4096) entries, the usual 1D LUT size in DRM/KMS. It's an array of drm_color_lut that goes to the DPP Gamma Correction block.
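As a rough illustration of what feeding that 1D degamma block from userspace could look like, here is a minimal Python sketch (mine, not from the talk or the patches) that packs a 4096-entry sRGB EOTF lookup table in the 4 x 16-bit layout of struct drm_color_lut. The blob-creation step and the exact property name a real compositor would use are omitted and left as assumptions.

```python
import struct

LUT_SIZE = 4096        # the 1D LUT size mentioned on slide 16
MAX_U16 = 0xFFFF

def srgb_eotf(c: float) -> float:
    """sRGB EOTF: encoded value in [0, 1] -> linear light in [0, 1]."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def build_degamma_blob() -> bytes:
    """Pack a LUT in the layout of struct drm_color_lut
    (__u16 red, green, blue, reserved). A real client would wrap this
    buffer in a DRM property blob and attach it to the plane's degamma
    LUT property; those steps are omitted in this sketch."""
    out = []
    for i in range(LUT_SIZE):
        v = round(srgb_eotf(i / (LUT_SIZE - 1)) * MAX_U16)
        out.append(struct.pack("<4H", v, v, v, 0))
    return b"".join(out)

blob = build_degamma_blob()
assert len(blob) == LUT_SIZE * 8   # 4 x u16 per entry
```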
Slide 17: Color Pipeline Diagram connecting AMD plane CTM property to AMD DC resources
We also now have a color transformation matrix (CTM) for color space conversion.
Slide 18: Describe plane CTM property and hardware capabilities
It's a 3x4 matrix of fixed-point values that goes to the DPP Gamut Remap block. Both pre- and post-blending matrices previously went to the same color block. We worked on detaching them to clear both paths. Now each CTM goes its own way.
Slide 19: Color Pipeline Diagram connecting AMD plane HDR multiplier property to AMD DC resources
Next, the HDR Multiplier. The HDR Multiplier is a factor applied to the color values of an image to increase their overall brightness.
Slide 20: Describe plane HDR mult property and hardware capabilities
This is useful for converting images from a standard dynamic range (SDR) to a high dynamic range (HDR). As it can range beyond [0.0, 1.0], subsequent transforms need to use the PQ (HDR) transfer functions.
Slide 21: Color Pipeline Diagram connecting AMD plane shaper properties, LUT and TF, to AMD DC resources
And we need a 3D LUT. But a 3D LUT has a limited number of entries in each dimension, so we want to use it in a colorspace that is optimized for human vision. That means a non-linear space. To deliver it, userspace may need one 1D LUT before the 3D LUT to delinearize content and another one after it to linearize content again for blending.
Slide 22: Describe plane shaper properties and hardware capabilities
The pre-3D-LUT curve is called the Shaper curve. Unlike the Degamma TF, there are no hardcoded curves for the shaper TF, but we can use the AMD color module in the driver to build shaper curves from pre-defined coefficients. The color module combines the TF and the user LUT values into the LUT that goes to the DPP Shaper RAM block.
Slide 23: Color Pipeline Diagram connecting AMD plane 3D LUT property to AMD DC resources
Finally, our rockstar, the 3D LUT. The 3D LUT is perfect for complex color transformations and adjustments between color channels.
Slide 24: Describe plane 3D LUT property and hardware capabilities
The 3D LUT is also more complex to manage and requires more computational resources; as a consequence, its number of entries is usually limited. To overcome this restriction, the array contains samples from the approximated function, and values between samples are estimated by tetrahedral interpolation. AMD supports 17 and 9 as the size of a single dimension. Blue is the outermost dimension, red the innermost.
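To make the layout described on slide 24 a little more concrete, here is a small Python sketch (again my own illustration, not code from the series) of how a 17x17x17 identity LUT would be flattened with blue as the outermost and red as the innermost dimension. Whether the driver-specific property expects exactly this drm_color_lut-style 4 x u16 packing is an assumption made for illustration only.

```python
import struct

DIM = 17          # the talk mentions 17 or 9 samples per dimension
MAX_U16 = 0xFFFF

def lut3d_index(r: int, g: int, b: int, dim: int = DIM) -> int:
    """Flattened index with blue outermost and red innermost,
    matching the ordering described on slide 24."""
    return (b * dim + g) * dim + r

def build_identity_3dlut() -> bytes:
    """Identity 3D LUT packed as 4 x u16 entries (assumed layout)."""
    entries = [b""] * DIM ** 3
    for blue in range(DIM):
        for green in range(DIM):
            for red in range(DIM):
                vals = [round(c / (DIM - 1) * MAX_U16) for c in (red, green, blue)]
                entries[lut3d_index(red, green, blue)] = struct.pack(
                    "<4H", vals[0], vals[1], vals[2], 0)
    return b"".join(entries)

blob = build_identity_3dlut()
assert len(blob) == DIM ** 3 * 8   # 4913 entries of 8 bytes each
```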
Slide 25: Color Pipeline Diagram connecting AMD plane blend properties, LUT and TF, to AMD DC resources
As mentioned, we need a post-3D-LUT curve to linearize the color space before blending. This is done by the Blend TF and LUT.
Slide 26: Describe plane blend properties and hardware capabilities
Similar to the shaper TF, there are no hardcoded curves for the Blend TF. The pre-defined curves are the same as in the Degamma block, but calculated by the color module. The resulting LUT goes to the DPP Blend RAM block.
Slide 27: Color Pipeline Diagram with all AMD plane color properties connected to AMD DC resources and links showing the conflict between plane and CRTC degamma
Now we have everything connected before blending. As a conflict between plane and CRTC Degamma was inevitable, our approach doesn't accept both being set at the same time.
Slide 28: Color Pipeline Diagram connecting AMD CRTC gamma TF property to AMD DC resources
We also optimized the conversion of the framebuffer to wire encoding by adding support for a pre-defined CRTC Gamma TF.
Slide 29: Describe CRTC gamma TF property and hardware capabilities
Again, there are no hardcoded curves; the TF and LUT are combined by the AMD color module. The same types of shaper curves are supported. The resulting LUT goes to the MPC Gamma RAM block.
Slide 30: Color Pipeline Diagram with all AMD driver-specific color properties connected to AMD DC resources
Finally, we arrived at the final version of the DRM/AMD driver-specific color management pipeline. With this knowledge, you're ready to better enjoy the rainbow treasure of AMD display hardware and the world of graphics computing.
Slide 31: SteamDeck/Gamescope Color Pipeline Diagram with rectangles labeling each block of the pipeline with the related AMD color property
With this work, Gamescope/Steam Deck embraces the color capabilities of the AMD GPU. We highlight here how we map the Gamescope color pipeline to each AMD color block.
Slide 32: Final slide. Thank you!
Future works: The search for the rainbow treasure is not over! The Linux DRM subsystem contains many hidden treasures from different vendors. We want more complex color transformations and adjustments available on Linux. We also want to expose all GPU color capabilities from all hardware vendors to the Linux userspace. Thanks to Joshua and Harry for this joint work, and to the Linux DRI community for all the feedback and reviews. The amazing part of this work comes in the next talk with Joshua and The Rainbow Frogs! Any questions?
References:
  1. Slides of the talk The Rainbow Treasure Map.
  2. Youtube video of the talk The Rainbow Treasure Map.
  3. Patch series for AMD driver-specific color management properties (upstream Linux 6.8v).
  4. SteamDeck/Gamescope color management pipeline
  5. XDC 2023 website.
  6. Igalia website.

12 December 2023

Freexian Collaborators: Monthly report about Debian Long Term Support, November 2023 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering. Some notable fixes which were made in LTS during the month of November include the gnutls28 cryptographic library and the freerdp2 Remote Desktop Protocol client/server implementation. The gnutls28 update was prepared by LTS contributor Markus Koschany and dealt with a timing attack which could be used to compromise a cryptographic system, while the freerdp2 update was prepared by LTS contributor Tobias Frost and is the result of work spanning 3 months to deal with dozens of vulnerabilities. In addition to the many ordinary LTS tasks which were completed (CVE triage, patch backports, package updates, etc.), there were several contributions by LTS contributors for the benefit of Debian stable and old-stable releases, as well as for the benefit of upstream projects. LTS contributor Abhijith PA uploaded an update of the puma package to unstable in order to fix a vulnerability in that package, while LTS contributor Thorsten Alteholz sponsored an upload to unstable of libde265 and himself made corresponding uploads of libde265 to Debian stable and old-stable. LTS contributor Bastien Roucariès developed patches for vulnerabilities in zbar and audiofile which were then provided to the respective upstream projects. Updates to packages in Debian stable were made by Markus Koschany to deal with security vulnerabilities and by Chris Lamb to deal with some non-security bugs. As always, the LTS team strives to provide high quality updates to packages under the direct purview of the LTS team while also rendering assistance to maintainers, the stable security team, and upstream developers whenever practical.

Debian LTS contributors In November, 18 contributors were paid to work on Debian LTS; their reports are available below:
  • Abhijith PA did 7.0h (out of 0h assigned and 14.0h from previous period), thus carrying over 7.0h to the next month.
  • Adrian Bunk did 15.0h (out of 14.0h assigned and 9.75h from previous period), thus carrying over 8.75h to the next month.
  • Anton Gladky did 10.0h (out of 9.5h assigned and 5.5h from previous period), thus carrying over 5.0h to the next month.
  • Bastien Roucariès did 16.0h (out of 18.25h assigned and 1.75h from previous period), thus carrying over 4.0h to the next month.
  • Ben Hutchings did 12.0h (out of 16.5h assigned and 12.25h from previous period), thus carrying over 16.75h to the next month.
  • Chris Lamb did 18.0h (out of 17.25h assigned and 0.75h from previous period).
  • Emilio Pozuelo Monfort did 15.5h (out of 23.5h assigned and 0.25h from previous period), thus carrying over 8.25h to the next month.
  • Guilhem Moulin did 13.0h (out of 12.0h assigned and 8.0h from previous period), thus carrying over 7.0h to the next month.
  • Lee Garrett did 14.5h (out of 16.75h assigned and 7.0h from previous period), thus carrying over 9.25h to the next month.
  • Markus Koschany did 30.0h (out of 30.0h assigned).
  • Ola Lundqvist did 6.5h (out of 8.25h assigned and 15.5h from previous period), thus carrying over 17.25h to the next month.
  • Roberto C. Sánchez did 5.5h (out of 12.0h assigned), thus carrying over 6.5h to the next month.
  • Santiago Ruano Rincón did 3.25h (out of 13.62h assigned and 2.375h from previous period), thus carrying over 12.745h to the next month.
  • Sean Whitton did 3.25h (out of 10.0h assigned), thus carrying over 6.75h to the next month.
  • Sylvain Beucler did 10.0h (out of 13.5h assigned and 10.25h from previous period), thus carrying over 13.75h to the next month.
  • Thorsten Alteholz did 14.0h (out of 14.0h assigned).
  • Tobias Frost did 12.0h (out of 12.0h assigned).
  • Utkarsh Gupta did 0.0h (out of 6.0h assigned and 17.75h from previous period), thus carrying over 23.75h to the next month.

Evolution of the situation In November, we released 35 DLAs.

Thanks to our sponsors Sponsors that joined recently are in bold.

29 November 2023

Jonathan Dowland: Useful vim plugins: AnsiEsc

Sometimes I have to pore over long debugging logs which have originally been written out to a terminal and marked up with colour or formatting via ANSI escape codes. The formatting definitely makes reading them easier, but I want to read them in Vim, rather than a terminal, and (out of the box) Vim doesn't render the formatting. Cue AnsiEsc.vim: an OG Vim script[1] that translates some ANSI escape codes - in particular some colour-specifying ones - into Vim syntax highlighting. This makes viewing and navigating around multi-MiB console log files much nicer.

  1. AnsiEsc is old enough to have been distributed as a script, and then as a "vimball", an invention of the same author to make installing Vim scripts easier. It pre-dates the current fashion for plugins, but someone else has updated it and repackaged it as a plugin. I haven't tried that out.
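For anyone who has never looked at such logs in their raw form, here is a tiny sketch (mine, not Jonathan's) of the kind of colour-specifying escape sequences involved; as I understand the plugin's interface, the :AnsiEsc command toggles the highlighting once the file is open.

```python
# Minimal illustration of SGR ("Select Graphic Rendition") colour codes,
# the kind of ANSI escape sequences AnsiEsc.vim turns into highlighting.
RED, GREEN, BOLD, RESET = "\033[31m", "\033[32m", "\033[1m", "\033[0m"

with open("demo.log", "w") as log:
    log.write(f"{GREEN}INFO{RESET}  service started\n")
    log.write(f"{BOLD}{RED}ERROR{RESET} connection refused\n")

# Opening demo.log in Vim shows the raw escape bytes (^[[31m and friends);
# with the plugin installed, running :AnsiEsc conceals them and applies
# the colours instead.
```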

21 November 2023

Mike Hommey: How I (kind of) killed Mercurial at Mozilla

Did you hear the news? Firefox development is moving from Mercurial to Git. While the decision is far from being mine, and I was barely involved in the small incremental changes that ultimately led to this decision, I feel I have to take at least some responsibility. And if you are one of those who would rather use Mercurial than Git, you may direct all your ire at me. But let's take a step back and review the past 25 years leading to this decision. You'll forgive me for skipping some details and any possible inaccuracies. This is already a long post, while I could have been more thorough, even I think that would have been too much. This is also not an official Mozilla position, only my personal perception and recollection as someone who was involved at times, but mostly an observer from a distance. From CVS to DVCS From its release in 1998, the Mozilla source code was kept in a CVS repository. If you're too young to know what CVS is, let's just say it's an old school version control system, with its set of problems. Back then, it was mostly ubiquitous in the Open Source world, as far as I remember. In the early 2000s, the Subversion version control system gained some traction, solving some of the problems that came with CVS. Incidentally, Subversion was created by Jim Blandy, who now works at Mozilla on completely unrelated matters. In the same period, the Linux kernel development moved from CVS to Bitkeeper, which was more suitable to the distributed nature of the Linux community. BitKeeper had its own problem, though: it was the opposite of Open Source, but for most pragmatic people, it wasn't a real concern because free access was provided. Until it became a problem: someone at OSDL developed an alternative client to BitKeeper, and licenses of BitKeeper were rescinded for OSDL members, including Linus Torvalds (they were even prohibited from purchasing one). Following this fiasco, in April 2005, two weeks from each other, both Git and Mercurial were born. The former was created by Linus Torvalds himself, while the latter was developed by Olivia Mackall, who was a Linux kernel developer back then. And because they both came out of the same community for the same needs, and the same shared experience with BitKeeper, they both were similar distributed version control systems. Interestingly enough, several other DVCSes existed: In this landscape, the major difference Git was making at the time was that it was blazing fast. Almost incredibly so, at least on Linux systems. That was less true on other platforms (especially Windows). It was a game-changer for handling large codebases in a smooth manner. Anyways, two years later, in 2007, Mozilla decided to move its source code not to Bzr, not to Git, not to Subversion (which, yes, was a contender), but to Mercurial. The decision "process" was laid down in two rather colorful blog posts. My memory is a bit fuzzy, but I don't recall that it was a particularly controversial choice. All of those DVCSes were still young, and there was no definite "winner" yet (GitHub hadn't even been founded). It made the most sense for Mozilla back then, mainly because the Git experience on Windows still wasn't there, and that mattered a lot for Mozilla, with its diverse platform support. As a contributor, I didn't think much of it, although to be fair, at the time, I was mostly consuming the source tarballs. 
Personal preferences Digging through my archives, I've unearthed a forgotten chapter: I did end up setting up both a Mercurial and a Git mirror of the Firefox source repository on alioth.debian.org. Alioth.debian.org was a FusionForge-based collaboration system for Debian developers, similar to SourceForge. It was the ancestor of salsa.debian.org. I used those mirrors for the Debian packaging of Firefox (cough cough Iceweasel). The Git mirror was created with hg-fast-export, and the Mercurial mirror was only a necessary step in the process. By that time, I had converted my Subversion repositories to Git, and switched off SVK. Incidentally, I started contributing to Git around that time as well. I apparently did this not too long after Mozilla switched to Mercurial. As a Linux user, I think I just wanted the speed that Mercurial was not providing. Not that Mercurial was that slow, but the difference between a couple seconds and a couple hundred milliseconds was a significant enough difference in user experience for me to prefer Git (and Firefox was not the only thing I was using version control for) Other people had also similarly created their own mirror, or with other tools. But none of them were "compatible": their commit hashes were different. Hg-git, used by the latter, was putting extra information in commit messages that would make the conversion differ, and hg-fast-export would just not be consistent with itself! My mirror is long gone, and those have not been updated in more than a decade. I did end up using Mercurial, when I got commit access to the Firefox source repository in April 2010. I still kept using Git for my Debian activities, but I now was also using Mercurial to push to the Mozilla servers. I joined Mozilla as a contractor a few months after that, and kept using Mercurial for a while, but as a, by then, long time Git user, it never really clicked for me. It turns out, the sentiment was shared by several at Mozilla. Git incursion In the early 2010s, GitHub was becoming ubiquitous, and the Git mindshare was getting large. Multiple projects at Mozilla were already entirely hosted on GitHub. As for the Firefox source code base, Mozilla back then was kind of a Wild West, and engineers being engineers, multiple people had been using Git, with their own inconvenient workflows involving a local Mercurial clone. The most popular set of scripts was moz-git-tools, to incorporate changes in a local Git repository into the local Mercurial copy, to then send to Mozilla servers. In terms of the number of people doing that, though, I don't think it was a lot of people, probably a few handfuls. On my end, I was still keeping up with Mercurial. I think at that time several engineers had their own unofficial Git mirrors on GitHub, and later on Ehsan Akhgari provided another mirror, with a twist: it also contained the full CVS history, which the canonical Mercurial repository didn't have. This was particularly interesting for engineers who needed to do some code archeology and couldn't get past the 2007 cutoff of the Mercurial repository. I think that mirror ultimately became the official-looking, but really unofficial, mozilla-central repository on GitHub. On a side note, a Mercurial repository containing the CVS history was also later set up, but that didn't lead to something officially supported on the Mercurial side. Some time around 2011~2012, I started to more seriously consider using Git for work myself, but wasn't satisfied with the workflows others had set up for themselves. 
I really didn't like the idea of wasting extra disk space keeping a Mercurial clone around while using a Git mirror. I wrote a Python script that would use Mercurial as a library to access a remote repository and produce a git-fast-import stream. That would allow the creation of a git repository without a local Mercurial clone. It worked quite well, but it was not able to incrementally update. Other, more complete tools existed already, some of which I mentioned above. But as time was passing and the size and depth of the Mercurial repository was growing, these tools were showing their limits and were too slow for my taste, especially for the initial clone. Boot to Git In the same time frame, Mozilla ventured in the Mobile OS sphere with Boot to Gecko, later known as Firefox OS. What does that have to do with version control? The needs of third party collaborators in the mobile space led to the creation of what is now the gecko-dev repository on GitHub. As I remember it, it was challenging to create, but once it was there, Git users could just clone it and have a working, up-to-date local copy of the Firefox source code and its history... which they could already have, but this was the first officially supported way of doing so. Coincidentally, Ehsan's unofficial mirror was having trouble (to the point of GitHub closing the repository) and was ultimately shut down in December 2013. You'll often find comments on the interwebs about how GitHub has become unreliable since the Microsoft acquisition. I can't really comment on that, but if you think GitHub is unreliable now, rest assured that it was worse in its beginning. And its sustainability as a platform also wasn't a given, being a rather new player. So on top of having this official mirror on GitHub, Mozilla also ventured in setting up its own Git server for greater control and reliability. But the canonical repository was still the Mercurial one, and while Git users now had a supported mirror to pull from, they still had to somehow interact with Mercurial repositories, most notably for the Try server. Git slowly creeping in Firefox build tooling Still in the same time frame, tooling around building Firefox was improving drastically. For obvious reasons, when version control integration was needed in the tooling, Mercurial support was always a no-brainer. The first explicit acknowledgement of a Git repository for the Firefox source code, other than the addition of the .gitignore file, was bug 774109. It added a script to install the prerequisites to build Firefox on macOS (still called OSX back then), and that would print a message inviting people to obtain a copy of the source code with either Mercurial or Git. That was a precursor to current bootstrap.py, from September 2012. Following that, as far as I can tell, the first real incursion of Git in the Firefox source tree tooling happened in bug 965120. A few days earlier, bug 952379 had added a mach clang-format command that would apply clang-format-diff to the output from hg diff. Obviously, running hg diff on a Git working tree didn't work, and bug 965120 was filed, and support for Git was added there. That was in January 2014. A year later, when the initial implementation of mach artifact was added (which ultimately led to artifact builds), Git users were an immediate thought. But while they were considered, it was not to support them, but to avoid actively breaking their workflows. Git support for mach artifact was eventually added 14 months later, in March 2016. 
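For readers unfamiliar with the git-fast-import stream mentioned a few paragraphs back, here is a rough Python sketch of the format such a converter emits. It is purely illustrative, not Mike's actual script: the part that would read changesets through Mercurial's internal API is left out, and the branch name, author identity, and file contents are made-up placeholders.

```python
# Toy generator of a git-fast-import stream for a single commit, to show
# the format that `git fast-import` consumes on stdin.
def fast_import_commit(branch, author, when, message, files):
    yield f"commit refs/heads/{branch}\n"
    yield f"committer {author} {when} +0000\n"
    msg = message.encode()
    yield f"data {len(msg)}\n{message}\n"
    for path, content in files.items():
        blob = content.encode()
        yield f"M 100644 inline {path}\n"
        yield f"data {len(blob)}\n{content}\n"

stream = "".join(fast_import_commit(
    branch="hg-mirror",
    author="A Hacker <hacker@example.org>",   # hypothetical identity
    when=1418774400,                          # seconds since the epoch
    message="Imported from a Mercurial changeset",
    files={"README": "hello\n"},
))
print(stream)  # pipe this into `git fast-import` inside a Git repository
```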
From gecko-dev to git-cinnabar Let's step back a little here, back to the end of 2014. My user experience with Mercurial had reached a level of dissatisfaction that was enough for me to decide to take that script from a couple years prior and make it work for incremental updates. That meant finding a way to store enough information locally to be able to reconstruct whatever the incremental updates would be relying on (guess why other tools hid a local Mercurial clone under the hood). I got something working rather quickly, and after talking to a few people about this side project at the Mozilla Portland All Hands and seeing their excitement, I published a git-remote-hg initial prototype on the last day of the All Hands. Within weeks, the prototype gained the ability to directly push to Mercurial repositories, and a couple months later, was renamed to git-cinnabar. At that point, as a Git user, instead of cloning the gecko-dev repository from GitHub and switching to a local Mercurial repository whenever you needed to push to a Mercurial repository (i.e. the aforementioned Try server, or, at the time, for reviews), you could just clone and push directly from/to Mercurial, all within Git. And it was fast too. You could get a full clone of mozilla-central in less than half an hour, when at the time, other similar tools would take more than 10 hours (needless to say, it's even worse now). Another couple months later (we're now at the end of April 2015), git-cinnabar became able to start off of a local clone of the gecko-dev repository, rather than cloning from scratch, which could be time-consuming. But because git-cinnabar and the tool that was updating gecko-dev weren't producing the same commits, this setup was cumbersome and not really recommended. For instance, if you pushed something to mozilla-central with git-cinnabar from a gecko-dev clone, it would come back with a different commit hash in gecko-dev, and you'd have to deal with the divergence. Eventually, in April 2020, the scripts updating gecko-dev were switched to git-cinnabar, making the use of gecko-dev alongside git-cinnabar a more viable option. Ironically(?), the switch occurred to ease collaboration with KaiOS (you know, the mobile OS born from the ashes of Firefox OS). Well, okay, in all honesty, when the need for syncing in both directions between Git and Mercurial (we had only ever synced from Mercurial to Git) came up, I nudged Mozilla in the direction of git-cinnabar, which, in my (biased but still honest) opinion, was the more reliable option for two-way synchronization (we did have regular conversion problems with hg-git; nothing of the sort has happened since the switch). One Firefox repository to rule them all For reasons I don't know, Mozilla decided to use separate Mercurial repositories as "branches". With the switch to the rapid release process in 2011, that meant one repository for nightly (mozilla-central), one for aurora, one for beta, and one for release. And with the addition of Extended Support Releases in 2012, we now add a new ESR repository every year. Boot to Gecko also had its own branches, and so did Fennec (Firefox for Mobile, before Android). There are a lot of them. And then there are also integration branches, where developers' work lands before being merged into mozilla-central (or backed out if it breaks things), always leaving mozilla-central in a (hopefully) good state. Only one of them remains in use today, though. I can only suppose that the way Mercurial branches work was not deemed practical. 
It is worth noting, though, that Mercurial branches are used in some cases, to branch off a dot-release when the next major release process has already started, so it's not a matter of not knowing the feature exists or some such. In 2016, Gregory Szorc set up a new repository that would contain them all (or at least most of them), which eventually became what is now the mozilla-unified repository. This would e.g. simplify switching between branches when necessary. 7 years later, for some reason, the other "branches" still exist, but most developers are expected to be using mozilla-unified. Mozilla's CI also switched to using mozilla-unified as its base repository. Honestly, I'm not sure why the separate repositories are still the main entry point for pushes, rather than going directly to mozilla-unified, but it probably comes down to switching being work, and not being a top priority. Also, it probably doesn't help that working with multiple heads in Mercurial, even (especially?) with bookmarks, can be a source of confusion. To give an example, if you aren't careful, and do a plain clone of the mozilla-unified repository, you may not end up on the latest mozilla-central changeset, but rather, e.g. one from beta, or some other branch, depending on which one was last updated. Hosting is simple, right? Put your repository on a server, install hgweb or gitweb, and that's it? Maybe that works for... Mercurial itself, but that repository "only" has slightly over 50k changesets and less than 4k files. Mozilla-central has more than an order of magnitude more changesets (close to 700k) and two orders of magnitude more files (more than 700k if you count the deleted or moved files, 350k if you count the currently existing ones). And remember, there are a lot of "duplicates" of this repository. And I didn't even mention user repositories and project branches. Sure, it's a self-inflicted pain, and you'd think it could probably(?) be mitigated with shared repositories. But consider the simple case of two repositories: mozilla-central and autoland. You make autoland use mozilla-central as a shared repository. Now, when you push something new to autoland, it's stored in the autoland datastore. Eventually, you merge to mozilla-central. Congratulations, it's now in both datastores, and you'd need to clean up autoland if you wanted to avoid the duplication. Now, you'd think mozilla-unified would solve these issues, and it would... to some extent. Because that wouldn't cover the user repositories and project branches briefly mentioned above, which in GitHub parlance would be considered Forks. So you'd want a mega global datastore shared by all repositories, and repositories would need to only expose what they really contain. Does Mercurial support that? I don't think so (okay, I'll give you that: even if it doesn't, it could, but that's extra work). And since we're talking about a transition to Git, does Git support that? You may have read about how you can link to a commit from a fork and make-pretend that it comes from the main repository on GitHub? At least it shows a warning now. That's essentially the architectural reason why that works. So the actual answer is that Git doesn't support it out of the box, but GitHub has some backend magic to handle it somehow (and hopefully, other things like Gitea, Girocco, GitLab, etc. have something similar). Now, to come back to the size of the repository. A repository is not a static file. It's a server with which you negotiate what you have against what it has that you want. 
Then the server bundles what you asked for based on what you said you have. Or in the opposite direction, you negotiate what you have that it doesn't, you send it, and the server incorporates what you sent it. Fortunately the latter is less frequent and requires authentication. But the former is more frequent and CPU-intensive. Especially when pulling a large number of changesets, which, incidentally, cloning is. "But there is a solution for clones" you might say, which is true. That's clonebundles, which offloads the CPU-intensive part of cloning to a single job scheduled regularly. Guess who implemented it? Mozilla. But that only covers the cloning part. We had actually laid the groundwork to support offloading large incremental updates and split clones, but that never materialized. Even with all that, that still leaves you with a server that can display file contents, diffs, blames, provide zip archives of a revision, and more, all of which are CPU-intensive in their own way. And these endpoints are regularly abused, and cause extra load on your servers, yes, plural, because of course a single server won't handle the load for the number of users of your big repositories. And because your endpoints are abused, you have to close some of them. And I'm not mentioning the Try repository with its tens of thousands of heads, which brings its own set of problems (and it would have even more heads if we didn't fake-merge them once in a while). Of course, all the above applies to Git (and it only gained support for something akin to clonebundles last year). So, when the Firefox OS project was stopped, there wasn't much motivation to continue supporting our own Git server, Mercurial still being the official point of entry, and git.mozilla.org was shut down in 2016. The growing difficulty of maintaining the status quo Slowly but steadily, in more recent years, as new tooling was added that needed some input from the source code manager, support for Git was more and more consistently added. But at the same time, as people left for other endeavors and weren't necessarily replaced, or more recently with layoffs, resources allocated to such tooling have been spread thin. Meanwhile, the repository growth didn't take a break, and the Try repository was becoming an increasing pain, with push times quite often exceeding 10 minutes. The ongoing work to move Try pushes to Lando will sweep the problem under the rug, but the underlying problem will still exist (although the last version of Mercurial seems to have improved things). On the flip side, more and more people have been relying on Git for Firefox development, to my own surprise, as I didn't really push for that to happen. It just happened organically, by way of git-cinnabar existing, providing a compelling experience to those who prefer Git, and, I guess, word of mouth. I was genuinely surprised when I recently heard the use of Git among moz-phab users had surpassed a third. I did, however, occasionally orient people who struggled with Mercurial and said they were more familiar with Git, towards git-cinnabar. I suspect there's a somewhat large number of people who never realized Git was a viable option. But that, on its own, can come with its own challenges: if you use git-cinnabar without being backed by gecko-dev, you'll have a hard time sharing your branches on GitHub, because you can't push to a fork of gecko-dev without pushing your entire local repository, as they have different commit histories. 
And switching to gecko-dev when you weren't already using it requires some extra work to rebase all your local branches from the old commit history to the new one. Clone times with git-cinnabar have also started to get a little out of hand in the past few years, but this was mitigated in a similar manner to the Mercurial cloning problem: with static files that are refreshed regularly. Ironically, that made cloning with git-cinnabar faster than cloning with Mercurial. But generating those static files is increasingly time-consuming. As of writing, generating those for mozilla-unified takes close to 7 hours. I was predicting clone times over 10 hours "in 5 years" in a post from 4 years ago; I wasn't too far off. With exponential growth, it could still happen, although to be fair, CPUs have improved since. I will explore the performance aspect in a subsequent blog post, alongside the upcoming release of git-cinnabar 0.7.0-b1. I don't even want to check how long it now takes with hg-git or git-remote-hg (they were already taking more than a day when git-cinnabar was taking a couple hours). I suppose it's about time that I clarify that git-cinnabar has always been a side-project. It hasn't been part of my duties at Mozilla, and the extent to which Mozilla supports git-cinnabar is in the form of taskcluster workers on the community instance for both git-cinnabar CI and generating those clone bundles. Consequently, that makes the above git-cinnabar-specific issues a Me problem, rather than a Mozilla problem. Taking the leap I can't speak for the people who made the proposal to move to Git, nor for the people who gave it the green light. But I can at least give my perspective. Developers have regularly asked why Mozilla was still using Mercurial, but I think it was the first time that a formal proposal was laid out. And it came from the Engineering Workflow team, responsible for issue tracking, code reviews, source control, build, and more. It's easy to say "Mozilla should have chosen Git in the first place", but back in 2007, GitHub wasn't there, Bitbucket wasn't there, and all the available options were rather new (especially compared to the then 21-year-old CVS). I think Mozilla made the right choice, all things considered. Had they waited a couple years, the story might have been different. You might say that Mozilla stayed with Mercurial for so long because of the sunk cost fallacy. I don't think that's true either. But after the biggest Mercurial repository hosting service turned off Mercurial support and the main contributor to Mercurial went their own way, it's hard to ignore that the landscape has evolved. And the problems that we regularly encounter with the Mercurial servers are not going to get any better as the repository continues to grow. As far as I know, all the Mercurial repositories bigger than Mozilla's are... not using Mercurial. Google has its own closed-source server, and Facebook has another of its own, which isn't really public either. With resources spread thin, I don't expect Mozilla to be able to continue supporting a Mercurial server indefinitely (although I guess Octobus could be contracted to give a hand, but is that sustainable?). Mozilla, being a champion of Open Source, also doesn't live in a silo. At some point, you have to meet your contributors where they are. And the Open Source world is now predominantly using Git. 
I'm sure the vast majority of new hires at Mozilla in the past, say, 5 years, know Git and have had to learn Mercurial (although they arguably didn't need to). Even within Mozilla, with thousands(!) of repositories on GitHub, Firefox is now actually the exception rather than the norm. I should actually say Desktop Firefox, because even Mobile Firefox lives on GitHub (although Fenix is moving back in together with Desktop Firefox, and the timing is such that that will probably happen before Firefox moves to Git). Heck, even Microsoft moved to Git! With a significant developer base already using Git thanks to git-cinnabar, and all the constraints and problems I mentioned previously, it actually seems natural that a transition (finally) happens. However, had git-cinnabar or something similarly viable not existed, I don't think Mozilla would be in a position to take this decision. On one hand, it probably wouldn't be in the current situation of having to support both Git and Mercurial in the tooling around Firefox, nor would it have the resource constraints related to that. But on the other hand, it would be further from supporting Git and being able to make the switch in order to address all the other problems. But... GitHub? I hope I made a compelling case that hosting is not as simple as it can seem, at the scale of the Firefox repository. It's also not Mozilla's main focus. Mozilla has enough on its plate with the migration of existing infrastructure that relies on Mercurial to understandably not want to figure out the hosting part as well, especially with limited resources, and given the mixed experience that hosting both Mercurial and Git has been so far. After all, GitHub couldn't even display things like the contributors' graph on gecko-dev until recently, and hosting is literally their job! They still drop the ball on large blames (thankfully we have searchfox for those). Where does that leave us? GitLab? For those criticizing GitHub for being proprietary, that's probably not open enough. Cloud Source Repositories? "But GitHub is Microsoft" is a complaint I've read a lot after the announcement. Do you think Google hosting would have appealed to these people? Bitbucket? I'm kind of surprised it wasn't in the list of providers that were considered, but I'm also kind of glad it wasn't (and I'll leave it at that). I think the only relatively big hosting provider that could have made the people criticizing the choice of GitHub happy is Codeberg, but I hadn't even heard of it before it was mentioned in response to Mozilla's announcement. But really, with literal thousands of Mozilla repositories already on GitHub, with literal tens of millions of repositories on the platform overall, the pragmatic in me can't deny that it's an attractive option (and I can't stress enough that I wasn't remotely close to the room where the discussion about what choice to make happened). "But it's a slippery slope". I can see that being a real concern. LLVM also moved its repository to GitHub (from a (I think) self-hosted Subversion server), and ended up moving off Bugzilla and Phabricator to GitHub issues and PRs four years later. As an occasional contributor to LLVM, I hate this move. I hate the GitHub review UI with a passion. At least, right now, GitHub PRs are not a viable option for Mozilla, for their lack of support for security-related PRs, and the more general shortcomings in the review UI. That doesn't mean things won't change in the future, but let's not get too far ahead of ourselves. 
The move to Git has just been announced, and the migration has not even begun yet. Just because Mozilla is moving the Firefox repository to GitHub doesn't mean it's locked in forever or that all the eggs are going to be thrown into one basket. If bridges need to be crossed in the future, we'll see then. So, what's next? The official announcement said we're not expecting the migration to really begin until six months from now. I'll swim against the current here, and say this: the earlier you can switch to Git, the earlier you'll find out what works and what doesn't work for you, whether you already know Git or not. While there is not one unique workflow, here's what I would recommend to anyone who wants to take the leap off Mercurial right now: As there is no one-size-fits-all workflow, I won't tell you how to organize yourself from there. I'll just say this: if you know the Mercurial sha1s of your previous local work, you can create branches for them with:
$ git branch <branch_name> $(git cinnabar hg2git <hg_sha1>)
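For example, with a purely illustrative changeset id and branch name, that could look like:
$ git branch my-old-try-push $(git cinnabar hg2git 0123456789abcdef0123456789abcdef01234567)
$ git checkout my-old-try-push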
At this point, you should have everything available on the Git side, and you can remove the .hg directory. Or move it into some empty directory somewhere else, just in case. But don't leave it here; it will only confuse the tooling. Artifact builds WILL be confused, though, and you'll have to ./mach configure before being able to do anything. You may also hit bug 1865299 if your working tree is older than this post. If you have any problem or question, you can ping me on #git-cinnabar or #git on Matrix. I'll put the instructions above somewhere on wiki.mozilla.org, and we can collaboratively iterate on them. Now, what the announcement didn't say is that the Git repository WILL NOT be gecko-dev, doesn't exist yet, and WON'T BE COMPATIBLE (trust me, it'll be for the better). Why did I make you do all the above, you ask? Because that won't be a problem. I'll have you covered, I promise. The upcoming release of git-cinnabar 0.7.0-b1 will have a way to smoothly switch between gecko-dev and the future repository (incidentally, that will also allow switching from a pure git-cinnabar clone to a gecko-dev one, for the git-cinnabar users who have kept reading this far). What about git-cinnabar? With Mercurial going the way of the dodo at Mozilla, my own need for git-cinnabar will vanish. Legitimately, this raises the question of whether it will still be maintained. I can't answer for sure. I don't have a crystal ball. However, the needs of the transition itself will motivate me to finish some long-standing things (like finalizing the support for pushing merges, which is currently behind an experimental flag) or implement some missing features (support for creating Mercurial branches). Git-cinnabar started as a Python script; it grew a sidekick implemented in C, which then incorporated some Rust, which then cannibalized the Python script and took its place. It is now close to 90% Rust, and 10% C (if you don't count the code from Git that is statically linked to it), and has sort of become my Rust playground (it's also, I must admit, a mess, because of its history, but it's getting better). So the day-to-day use with Mercurial is not my sole motivation to keep developing it. If it were, it would stay stagnant, because all the features I need are there, and the speed is not all that bad, although I know it could be better. Arguably, though, git-cinnabar has been relatively stagnant feature-wise for exactly that reason. So, no, I don't expect git-cinnabar to die along with Mercurial use at Mozilla, but I can't really promise anything either. Final words That was a long post. But there was a lot of ground to cover. And I still skipped over a bunch of things. I hope I didn't bore you to death. If I did and you're still reading... what's wrong with you? ;) So this is the end of Mercurial at Mozilla. So long, and thanks for all the fish. But this is also the beginning of a transition that is not easy, and that will not be without hiccups, I'm sure. So fasten your seatbelts (plural), and welcome the change. To circle back to the clickbait title, did I really kill Mercurial at Mozilla? Of course not. But it's like I stumbled upon a few sparks and tossed a can of gasoline on them. I didn't start the fire, but I sure made it into a proper bonfire... and now it has turned into a wildfire. And who knows? 15 years from now, someone else might be looking back at how Mozilla picked Git at the wrong time, and that, had we waited a little longer, we would have picked some yet-to-come new horse. 
But hey, that's the tech cycle for you.

Russ Allbery: Review: Thud!

Review: Thud!, by Terry Pratchett
Series: Discworld #34
Publisher: Harper
Copyright: October 2005
Printing: November 2014
ISBN: 0-06-233498-0
Format: Mass market
Pages: 434
Thud! is the 34th Discworld novel and the seventh Watch novel. It is partly a sequel to The Fifth Elephant, partly a sequel to Night Watch, and references many of the previous Watch novels. This is not a good place to start. Dwarfs and trolls have a long history of conflict, as one might expect between a race of creatures who specialize in mining and a race of creatures whose vital organs are sometimes the targets of that mining. The first battle of Koom Valley was the place where that enmity was made concrete and given a symbol. Now that there are large dwarf and troll populations in Ankh-Morpork, the upcoming anniversary of that battle is the excuse for rising tensions. Worse, Grag Hamcrusher, a revered deep-down dwarf and a dwarf supremacist, is giving incendiary speeches about killing all trolls and appears to be tunneling under the city. Then whispers run through the city's dwarfs that Hamcrusher has been murdered by a troll. Vimes has no patience for racial tensions, or for the inspection of the Watch by one of Vetinari's excessively competent clerks, or the political pressure to add a vampire to the Watch over his prejudiced objections. He was already grumpy before the murder and is in absolutely no mood to be told by deep-down dwarfs who barely believe that humans exist that the murder of a dwarf underground is no affair of his. Meanwhile, The Battle of Koom Valley by Methodia Rascal has been stolen from the Ankh-Morpork Royal Art Museum, an impressive feat given that the painting is ten feet high and fifty feet long. It was painted in impressive detail by a madman who thought he was a chicken, and has been the spark for endless theories about clues to some great treasure or hidden knowledge, culminating in the conspiratorial book Koom Valley Codex. But the museum prides itself on allowing people to inspect and photograph the painting to their heart's content and was working on a new room to display it. It's not clear why someone would want to steal it, but Colon and Nobby are on the case. This was a good time to read this novel. Sadly, the same could be said of pretty much every year since it was written. "Thud" in the title is a reference to Hamcrusher's murder, which was supposedly done by a troll club that was found nearby, but it's also a reference to a board game that we first saw in passing in Going Postal. We find out a lot more about Thud in this book. It's an asymmetric two-player board game that simulates a stylized battle between dwarf and troll forces, with one player playing the trolls and the other playing the dwarfs. The obvious comparison is to chess, but a better comparison would be to the old Steve Jackson Games board game Ogre, which also featured asymmetric combat mechanics. (I'm sure there are many others.) This board game will become quite central to the plot of Thud! in ways that I thought were ingenious. I thought this was one of Pratchett's best-plotted books to date. There are a lot of things happening, involving essentially every member of the Watch that we've met in previous books, and they all matter and I was never confused by how they fit together. This book is full of little callbacks and apparently small things that become important later in a way that I found delightful to read, down to the children's book that Vimes reads to his son and that turns into the best scene of the book. At this point in my Discworld read-through, I can see why the Watch books are considered the best sub-series. 
It feels like Pratchett kicks the quality of writing up a notch when he has Vimes as a protagonist. In several books now, Pratchett has created a villain by taking some human characteristic and turning it into an external force that acts on humans. (See, for instance the Gonne in Men at Arms, or the hiver in A Hat Full of Sky.) I normally do not like this plot technique, both because I think it lets humans off the hook in a way that cheapens the story and because this type of belief has a long and bad reputation in religions where it is used to dodge personal responsibility and dehumanize one's enemies. When another of those villains turned up in this book, I was dubious. But I think Pratchett pulls off this type of villain as well here as I've seen it done. He lifts up a facet of humanity to let the reader get a better view, but somehow makes it explicit that this is concretized metaphor. This force is something people create and feed and choose and therefore are responsible for. The one sour note that I do have to complain about is that Pratchett resorts to some cheap and annoying "men are from Mars, women are from Venus" nonsense, mostly around Nobby's subplot but in a few other places (Sybil, some of Angua's internal monologue) as well. It's relatively minor, and I might let it pass without grumbling in other books, but usually Pratchett is better on gender than this. I expected better and it got under my skin. Otherwise, though, this was a quietly excellent book. It doesn't have the emotional gut punch of Night Watch, but the plotting is superb and the pacing is a significant improvement over The Fifth Elephant. The parody is of The Da Vinci Code, which is both more interesting than Pratchett's typical movie parodies and delightfully subtle. We get more of Sybil being a bad-ass, which I am always here for. There's even some lovely world-building in the form of dwarven Devices. I love how Pratchett has built Vimes up into one of the most deceptively heroic figures on Discworld, but also shows all of the support infrastructure that ensures Vimes maintain his principles. On the surface, Thud! has a lot in common with Vimes's insistently moral stance in Jingo, but here it is more obvious how Vimes's morality happens in part because his wife, his friends, and his boss create the conditions for it to thrive. Highly recommended to anyone who has gotten this far. Rating: 9 out of 10

12 November 2023

Petter Reinholdtsen: New and improved sqlcipher in Debian for accessing Signal database

For a while now I have wanted direct access to the Signal database of messages and channels of my Desktop edition of Signal. I prefer the enforced end-to-end encryption of Signal these days for my communication with friends and family, to increase the level of safety and privacy as well as to raise the cost of the mass surveillance government and non-government entities practice these days. In August I came across a nice recipe explaining how to use sqlcipher to extract statistics from the Signal database. Unfortunately this did not work with the version of sqlcipher in Debian. The sqlcipher package is a "fork" of the sqlite package with added support for encrypted databases. Sadly the current Debian maintainer announced more than three years ago that he did not have time to maintain sqlcipher, so it seemed unlikely to be upgraded by the maintainer. I was reluctant to take on the job myself, as I have very limited experience maintaining shared libraries in Debian. After waiting and hoping for a few months, I gave up last week, and set out to update the package. In the process I orphaned it to make it more obvious for the next person looking at it that the package needs proper maintenance. The version in Debian was around five years old, and quite a lot of changes had taken place upstream that needed importing into the Debian maintenance git repository. After spending a few days importing the new upstream versions, realising that upstream did not care much for SONAME versioning as I saw library symbols being both added and removed with minor version number changes to the project, I concluded that I had to do a SONAME bump of the library package to avoid surprising the reverse dependencies. I even added a simple autopkgtest script to ensure the package works as intended. Having dug deep into the hole of learning shared library maintenance, I set out a few days ago to upload the new version to Debian experimental to see what the quality assurance framework in Debian had to say about the result. The feedback told me the package was not too shabby, and yesterday I uploaded the latest version to Debian unstable. It should enter testing today or tomorrow, perhaps delayed by a small library transition. Armed with a new version of sqlcipher, I can now have a look at the SQL database in ~/.config/Signal/sql/db.sqlite. First, one needs to fetch the encryption key from the Signal configuration using this simple JSON extraction command:
/usr/bin/jq -r '."key"' ~/.config/Signal/config.json
Assume the result from that command is 'secretkey', a hexadecimal number representing the key used to encrypt the database. Next, one can connect to the database and inject the encryption key to gain access via SQL and fetch information from the database. Here is an example dumping the database structure:
% sqlcipher ~/.config/Signal/sql/db.sqlite
sqlite> PRAGMA key = "x'secretkey'";
sqlite> .schema
CREATE TABLE sqlite_stat1(tbl,idx,stat);
CREATE TABLE conversations(
      id STRING PRIMARY KEY ASC,
      json TEXT,
      active_at INTEGER,
      type STRING,
      members TEXT,
      name TEXT,
      profileName TEXT
    , profileFamilyName TEXT, profileFullName TEXT, e164 TEXT, serviceId TEXT, groupId TEXT, profileLastFetchedAt INTEGER);
CREATE TABLE identityKeys(
      id STRING PRIMARY KEY ASC,
      json TEXT
    );
CREATE TABLE items(
      id STRING PRIMARY KEY ASC,
      json TEXT
    );
CREATE TABLE sessions(
      id TEXT PRIMARY KEY,
      conversationId TEXT,
      json TEXT
    , ourServiceId STRING, serviceId STRING);
CREATE TABLE attachment_downloads(
    id STRING primary key,
    timestamp INTEGER,
    pending INTEGER,
    json TEXT
  );
CREATE TABLE sticker_packs(
    id TEXT PRIMARY KEY,
    key TEXT NOT NULL,
    author STRING,
    coverStickerId INTEGER,
    createdAt INTEGER,
    downloadAttempts INTEGER,
    installedAt INTEGER,
    lastUsed INTEGER,
    status STRING,
    stickerCount INTEGER,
    title STRING
  , attemptedStatus STRING, position INTEGER DEFAULT 0 NOT NULL, storageID STRING, storageVersion INTEGER, storageUnknownFields BLOB, storageNeedsSync
      INTEGER DEFAULT 0 NOT NULL);
CREATE TABLE stickers(
    id INTEGER NOT NULL,
    packId TEXT NOT NULL,
    emoji STRING,
    height INTEGER,
    isCoverOnly INTEGER,
    lastUsed INTEGER,
    path STRING,
    width INTEGER,
    PRIMARY KEY (id, packId),
    CONSTRAINT stickers_fk
      FOREIGN KEY (packId)
      REFERENCES sticker_packs(id)
      ON DELETE CASCADE
  );
CREATE TABLE sticker_references(
    messageId STRING,
    packId TEXT,
    CONSTRAINT sticker_references_fk
      FOREIGN KEY(packId)
      REFERENCES sticker_packs(id)
      ON DELETE CASCADE
  );
CREATE TABLE emojis(
    shortName TEXT PRIMARY KEY,
    lastUsage INTEGER
  );
CREATE TABLE messages(
        rowid INTEGER PRIMARY KEY ASC,
        id STRING UNIQUE,
        json TEXT,
        readStatus INTEGER,
        expires_at INTEGER,
        sent_at INTEGER,
        schemaVersion INTEGER,
        conversationId STRING,
        received_at INTEGER,
        source STRING,
        hasAttachments INTEGER,
        hasFileAttachments INTEGER,
        hasVisualMediaAttachments INTEGER,
        expireTimer INTEGER,
        expirationStartTimestamp INTEGER,
        type STRING,
        body TEXT,
        messageTimer INTEGER,
        messageTimerStart INTEGER,
        messageTimerExpiresAt INTEGER,
        isErased INTEGER,
        isViewOnce INTEGER,
        sourceServiceId TEXT, serverGuid STRING NULL, sourceDevice INTEGER, storyId STRING, isStory INTEGER
        GENERATED ALWAYS AS (type IS 'story'), isChangeCreatedByUs INTEGER NOT NULL DEFAULT 0, isTimerChangeFromSync INTEGER
        GENERATED ALWAYS AS (
          json_extract(json, '$.expirationTimerUpdate.fromSync') IS 1
        ), seenStatus NUMBER default 0, storyDistributionListId STRING, expiresAt INT
        GENERATED ALWAYS
        AS (ifnull(
          expirationStartTimestamp + (expireTimer * 1000),
          9007199254740991
        )), shouldAffectActivity INTEGER
        GENERATED ALWAYS AS (
          type IS NULL
          OR
          type NOT IN (
            'change-number-notification',
            'contact-removed-notification',
            'conversation-merge',
            'group-v1-migration',
            'keychange',
            'message-history-unsynced',
            'profile-change',
            'story',
            'universal-timer-notification',
            'verified-change'
          )
        ), shouldAffectPreview INTEGER
        GENERATED ALWAYS AS (
          type IS NULL
          OR
          type NOT IN (
            'change-number-notification',
            'contact-removed-notification',
            'conversation-merge',
            'group-v1-migration',
            'keychange',
            'message-history-unsynced',
            'profile-change',
            'story',
            'universal-timer-notification',
            'verified-change'
          )
        ), isUserInitiatedMessage INTEGER
        GENERATED ALWAYS AS (
          type IS NULL
          OR
          type NOT IN (
            'change-number-notification',
            'contact-removed-notification',
            'conversation-merge',
            'group-v1-migration',
            'group-v2-change',
            'keychange',
            'message-history-unsynced',
            'profile-change',
            'story',
            'universal-timer-notification',
            'verified-change'
          )
        ), mentionsMe INTEGER NOT NULL DEFAULT 0, isGroupLeaveEvent INTEGER
        GENERATED ALWAYS AS (
          type IS 'group-v2-change' AND
          json_array_length(json_extract(json, '$.groupV2Change.details')) IS 1 AND
          json_extract(json, '$.groupV2Change.details[0].type') IS 'member-remove' AND
          json_extract(json, '$.groupV2Change.from') IS NOT NULL AND
          json_extract(json, '$.groupV2Change.from') IS json_extract(json, '$.groupV2Change.details[0].aci')
        ), isGroupLeaveEventFromOther INTEGER
        GENERATED ALWAYS AS (
          isGroupLeaveEvent IS 1
          AND
          isChangeCreatedByUs IS 0
        ), callId TEXT
        GENERATED ALWAYS AS (
          json_extract(json, '$.callId')
        ));
CREATE TABLE sqlite_stat4(tbl,idx,neq,nlt,ndlt,sample);
CREATE TABLE jobs(
        id TEXT PRIMARY KEY,
        queueType TEXT STRING NOT NULL,
        timestamp INTEGER NOT NULL,
        data STRING TEXT
      );
CREATE TABLE reactions(
        conversationId STRING,
        emoji STRING,
        fromId STRING,
        messageReceivedAt INTEGER,
        targetAuthorAci STRING,
        targetTimestamp INTEGER,
        unread INTEGER
      , messageId STRING);
CREATE TABLE senderKeys(
        id TEXT PRIMARY KEY NOT NULL,
        senderId TEXT NOT NULL,
        distributionId TEXT NOT NULL,
        data BLOB NOT NULL,
        lastUpdatedDate NUMBER NOT NULL
      );
CREATE TABLE unprocessed(
        id STRING PRIMARY KEY ASC,
        timestamp INTEGER,
        version INTEGER,
        attempts INTEGER,
        envelope TEXT,
        decrypted TEXT,
        source TEXT,
        serverTimestamp INTEGER,
        sourceServiceId STRING
      , serverGuid STRING NULL, sourceDevice INTEGER, receivedAtCounter INTEGER, urgent INTEGER, story INTEGER);
CREATE TABLE sendLogPayloads(
        id INTEGER PRIMARY KEY ASC,
        timestamp INTEGER NOT NULL,
        contentHint INTEGER NOT NULL,
        proto BLOB NOT NULL
      , urgent INTEGER, hasPniSignatureMessage INTEGER DEFAULT 0 NOT NULL);
CREATE TABLE sendLogRecipients(
        payloadId INTEGER NOT NULL,
        recipientServiceId STRING NOT NULL,
        deviceId INTEGER NOT NULL,
        PRIMARY KEY (payloadId, recipientServiceId, deviceId),
        CONSTRAINT sendLogRecipientsForeignKey
          FOREIGN KEY (payloadId)
          REFERENCES sendLogPayloads(id)
          ON DELETE CASCADE
      );
CREATE TABLE sendLogMessageIds(
        payloadId INTEGER NOT NULL,
        messageId STRING NOT NULL,
        PRIMARY KEY (payloadId, messageId),
        CONSTRAINT sendLogMessageIdsForeignKey
          FOREIGN KEY (payloadId)
          REFERENCES sendLogPayloads(id)
          ON DELETE CASCADE
      );
CREATE TABLE preKeys(
        id STRING PRIMARY KEY ASC,
        json TEXT
      , ourServiceId NUMBER
        GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId')));
CREATE TABLE signedPreKeys(
        id STRING PRIMARY KEY ASC,
        json TEXT
      , ourServiceId NUMBER
        GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId')));
CREATE TABLE badges(
        id TEXT PRIMARY KEY,
        category TEXT NOT NULL,
        name TEXT NOT NULL,
        descriptionTemplate TEXT NOT NULL
      );
CREATE TABLE badgeImageFiles(
        badgeId TEXT REFERENCES badges(id)
          ON DELETE CASCADE
          ON UPDATE CASCADE,
        'order' INTEGER NOT NULL,
        url TEXT NOT NULL,
        localPath TEXT,
        theme TEXT NOT NULL
      );
CREATE TABLE storyReads (
        authorId STRING NOT NULL,
        conversationId STRING NOT NULL,
        storyId STRING NOT NULL,
        storyReadDate NUMBER NOT NULL,
        PRIMARY KEY (authorId, storyId)
      );
CREATE TABLE storyDistributions(
        id STRING PRIMARY KEY NOT NULL,
        name TEXT,
        senderKeyInfoJson STRING
      , deletedAtTimestamp INTEGER, allowsReplies INTEGER, isBlockList INTEGER, storageID STRING, storageVersion INTEGER, storageUnknownFields BLOB, storageNeedsSync INTEGER);
CREATE TABLE storyDistributionMembers(
        listId STRING NOT NULL REFERENCES storyDistributions(id)
          ON DELETE CASCADE
          ON UPDATE CASCADE,
        serviceId STRING NOT NULL,
        PRIMARY KEY (listId, serviceId)
      );
CREATE TABLE uninstalled_sticker_packs (
        id STRING NOT NULL PRIMARY KEY,
        uninstalledAt NUMBER NOT NULL,
        storageID STRING,
        storageVersion NUMBER,
        storageUnknownFields BLOB,
        storageNeedsSync INTEGER NOT NULL
      );
CREATE TABLE groupCallRingCancellations(
        ringId INTEGER PRIMARY KEY,
        createdAt INTEGER NOT NULL
      );
CREATE TABLE IF NOT EXISTS 'messages_fts_data'(id INTEGER PRIMARY KEY, block BLOB);
CREATE TABLE IF NOT EXISTS 'messages_fts_idx'(segid, term, pgno, PRIMARY KEY(segid, term)) WITHOUT ROWID;
CREATE TABLE IF NOT EXISTS 'messages_fts_content'(id INTEGER PRIMARY KEY, c0);
CREATE TABLE IF NOT EXISTS 'messages_fts_docsize'(id INTEGER PRIMARY KEY, sz BLOB);
CREATE TABLE IF NOT EXISTS 'messages_fts_config'(k PRIMARY KEY, v) WITHOUT ROWID;
CREATE TABLE edited_messages(
        messageId STRING REFERENCES messages(id)
          ON DELETE CASCADE,
        sentAt INTEGER,
        readStatus INTEGER
      , conversationId STRING);
CREATE TABLE mentions (
        messageId REFERENCES messages(id) ON DELETE CASCADE,
        mentionAci STRING,
        start INTEGER,
        length INTEGER
      );
CREATE TABLE kyberPreKeys(
        id STRING PRIMARY KEY NOT NULL,
        json TEXT NOT NULL, ourServiceId NUMBER
        GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId')));
CREATE TABLE callsHistory (
        callId TEXT PRIMARY KEY,
        peerId TEXT NOT NULL, -- conversation id (legacy)   uuid   groupId   roomId
        ringerId TEXT DEFAULT NULL, -- ringer uuid
        mode TEXT NOT NULL, -- enum "Direct"   "Group"
        type TEXT NOT NULL, -- enum "Audio"   "Video"   "Group"
        direction TEXT NOT NULL, -- enum "Incoming"   "Outgoing
        -- Direct: enum "Pending"   "Missed"   "Accepted"   "Deleted"
        -- Group: enum "GenericGroupCall"   "OutgoingRing"   "Ringing"   "Joined"   "Missed"   "Declined"   "Accepted"   "Deleted"
        status TEXT NOT NULL,
        timestamp INTEGER NOT NULL,
        UNIQUE (callId, peerId) ON CONFLICT FAIL
      );
[ dropped all indexes to save space in this blog post ]
CREATE TRIGGER messages_on_view_once_update AFTER UPDATE ON messages
      WHEN
        new.body IS NOT NULL AND new.isViewOnce = 1
      BEGIN
        DELETE FROM messages_fts WHERE rowid = old.rowid;
      END;
CREATE TRIGGER messages_on_insert AFTER INSERT ON messages
      WHEN new.isViewOnce IS NOT 1 AND new.storyId IS NULL
      BEGIN
        INSERT INTO messages_fts
          (rowid, body)
        VALUES
          (new.rowid, new.body);
      END;
CREATE TRIGGER messages_on_delete AFTER DELETE ON messages BEGIN
        DELETE FROM messages_fts WHERE rowid = old.rowid;
        DELETE FROM sendLogPayloads WHERE id IN (
          SELECT payloadId FROM sendLogMessageIds
          WHERE messageId = old.id
        );
        DELETE FROM reactions WHERE rowid IN (
          SELECT rowid FROM reactions
          WHERE messageId = old.id
        );
        DELETE FROM storyReads WHERE storyId = old.storyId;
      END;
CREATE VIRTUAL TABLE messages_fts USING fts5(
        body,
        tokenize = 'signal_tokenizer'
      );
CREATE TRIGGER messages_on_update AFTER UPDATE ON messages
      WHEN
        (new.body IS NULL OR old.body IS NOT new.body) AND
         new.isViewOnce IS NOT 1 AND new.storyId IS NULL
      BEGIN
        DELETE FROM messages_fts WHERE rowid = old.rowid;
        INSERT INTO messages_fts
          (rowid, body)
        VALUES
          (new.rowid, new.body);
      END;
CREATE TRIGGER messages_on_insert_insert_mentions AFTER INSERT ON messages
      BEGIN
        INSERT INTO mentions (messageId, mentionAci, start, length)
        
    SELECT messages.id, bodyRanges.value ->> 'mentionAci' as mentionAci,
      bodyRanges.value ->> 'start' as start,
      bodyRanges.value ->> 'length' as length
    FROM messages, json_each(messages.json ->> 'bodyRanges') as bodyRanges
    WHERE bodyRanges.value ->> 'mentionAci' IS NOT NULL
  
        AND messages.id = new.id;
      END;
CREATE TRIGGER messages_on_update_update_mentions AFTER UPDATE ON messages
      BEGIN
        DELETE FROM mentions WHERE messageId = new.id;
        INSERT INTO mentions (messageId, mentionAci, start, length)
        
    SELECT messages.id, bodyRanges.value ->> 'mentionAci' as mentionAci,
      bodyRanges.value ->> 'start' as start,
      bodyRanges.value ->> 'length' as length
    FROM messages, json_each(messages.json ->> 'bodyRanges') as bodyRanges
    WHERE bodyRanges.value ->> 'mentionAci' IS NOT NULL
  
        AND messages.id = new.id;
      END;
sqlite>
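With the key accepted, ordinary SQL works against these tables. As a small illustration of my own (assuming sent_at stores milliseconds since the epoch), the ten most recent message bodies can be listed with:
sqlite> SELECT datetime(sent_at / 1000, 'unixepoch') AS sent, conversationId, type, body
   ...>   FROM messages
   ...>  WHERE body IS NOT NULL
   ...>  ORDER BY sent_at DESC
   ...>  LIMIT 10;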
Finally I have the tool I need to inspect and process Signal messages, without using the vendor-provided client. Now on to transforming it to a more useful format. As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

7 November 2023

Jonathan Dowland: gitsigns (useful neovim plugins)

gitsigns is a Neovim plugin which adds a wonderfully subtle colour annotation in the left-hand gutter to reflect changes in the buffer since the last git commit1. My long-term habit with Vim and Git is to frequently background Vim (^Z) to invoke the git command directly and then foreground Vim again (fg). Over the last few years I've been trying more and more to call vim-fugitive from within Vim instead. (I still do rebases and most merges the old-fashioned way). For the most part Gitsigns is a nice passive addition to that, but it can also do a lot of useful things that Fugitive also does. Previewing changed hunks in a little floating window, in particular when resolving an awkward merge conflict, is very handy.
An example screenshot of the Gitsigns plugin
The above picture shows it in action. I've changed two lines and added another three in the top-left buffer. Gitsigns tries to choose colours from the currently active colour scheme (in this case, tender).

  1. by default Gitsigns shows changes since the last commit, as git diff does, but you can easily switch out the base from which it compares.
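     For example, switching the comparison base a few commits back should be as simple as running :Gitsigns change_base HEAD~3 (going from the plugin's documented commands; double-check :help gitsigns for your installed version).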

Melissa Wen: AMD Driver-specific Properties for Color Management on Linux (Part 2)

TL;DR: This blog post explores the color capabilities of AMD hardware and how they are exposed to userspace through driver-specific properties. It discusses the different color blocks in the AMD Display Core Next (DCN) pipeline and their capabilities, such as predefined transfer functions, 1D and 3D lookup tables (LUTs), and color transformation matrices (CTMs). It also highlights the differences in AMD HW blocks for pre- and post-blending adjustments, and how these differences are reflected in the available driver-specific properties. Overall, this blog post provides a comprehensive overview of the color capabilities of AMD hardware and how they can be controlled by userspace applications through driver-specific properties. This information is valuable for anyone who wants to develop applications that can take advantage of the AMD color management pipeline. Get a closer look at each hardware block's capabilities, unlock a wealth of knowledge about AMD display hardware, and enhance your understanding of graphics and visual computing. Stay tuned for future developments as we embark on a quest for GPU color capabilities in the ever-evolving realm of rainbow treasures.
Operating Systems can use the power of GPUs to ensure consistent color reproduction across graphics devices. We can use GPU-accelerated color management to manage the diversity of color profiles, do color transformations to convert between High-Dynamic-Range (HDR) and Standard-Dynamic-Range (SDR) content, and color enhancements for wide color gamut (WCG). However, to make use of GPU display capabilities, we need an interface between userspace and the kernel display drivers that is currently absent in the Linux/DRM KMS API. In the previous blog post I presented how we are expanding the Linux/DRM color management API to expose specific properties of AMD hardware. Now, I'll guide you to the color features for the Linux/AMD display driver. We embark on a journey through DRM/KMS, AMD Display Manager, and AMD Display Core and delve into the color blocks to uncover the secrets of color manipulation within AMD hardware. Here we'll talk less about the color tools and more about where to find them in the hardware. We resort to driver-specific properties to reach AMD hardware blocks with color capabilities. These blocks display features like predefined transfer functions, color transformation matrices, and 1-dimensional (1D LUT) and 3-dimensional lookup tables (3D LUT). Here, we will understand how these color features are strategically placed into color blocks both before and after blending in Display Pipe and Plane (DPP) and Multiple Pipe/Plane Combined (MPC) blocks. That said, welcome back to the second part of our thrilling journey through AMD's color management realm!

AMD Display Driver in the Linux/DRM Subsystem: The Journey In my 2022 XDC talk "I'm not an AMD expert, but...", I briefly explained the organizational structure of the Linux/AMD display driver where the driver code is bifurcated into a Linux-specific section and a shared-code portion. To reveal AMD's color secrets through the Linux kernel DRM API, our journey led us through these layers of the Linux/AMD display driver's software stack. It includes traversing the DRM/KMS framework, the AMD Display Manager (DM), and the AMD Display Core (DC) [1]. The DRM/KMS framework provides the atomic API for color management through KMS properties represented by struct drm_property. We extended the color management interface exposed to userspace by leveraging existing resources and connecting them with driver-specific functions for managing modeset properties. On the AMD DC layer, the interface with hardware color blocks is established. The AMD DC layer contains OS-agnostic components that are shared across different platforms, making it an invaluable resource. This layer already implements hardware programming and resource management, simplifying the external developer's task. While examining the DC code, we gain insights into the color pipeline and capabilities, even without direct access to specifications. Additionally, AMD developers provide essential support by answering queries and reviewing our work upstream. The primary challenge involved identifying and understanding relevant AMD DC code to configure each color block in the color pipeline. However, the ultimate goal was to bridge the DC color capabilities with the DRM API. For this, we changed the AMD DM, the OS-dependent layer connecting the DC interface to the DRM/KMS framework. We defined and managed driver-specific color properties, facilitated the transport of user space data to the DC, and translated DRM features and settings to the DC interface. Considerations were also made for differences in the color pipeline based on hardware capabilities.
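To make the driver-specific property mechanism a bit more concrete, here is a rough sketch of how a driver can create an enum property and attach it to a plane using the struct drm_property machinery mentioned above (illustrative only, not the actual AMD DM code; the property name and values are made up):
#include <linux/kernel.h>
#include <drm/drm_mode_object.h>
#include <drm/drm_plane.h>
#include <drm/drm_property.h>

/* Illustrative enum values; a real driver exposes its own set. */
static const struct drm_prop_enum_list example_tf_list[] = {
	{ 0, "Default" },
	{ 1, "sRGB" },
	{ 2, "PQ" },
};

static void example_attach_degamma_tf_property(struct drm_plane *plane)
{
	struct drm_property *prop;

	/* Create the enum property on the device... */
	prop = drm_property_create_enum(plane->dev, 0, "EXAMPLE_DEGAMMA_TF",
					example_tf_list,
					ARRAY_SIZE(example_tf_list));
	if (!prop)
		return;

	/* ...and attach it to the plane object, starting at "Default". */
	drm_object_attach_property(&plane->base, prop, 0);
}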

Exploring Color Capabilities of the AMD display hardware Now, let's dive into the exciting realm of AMD color capabilities, where an abundance of techniques and tools await to make your colors look extraordinary across diverse devices. First, we need to know a little about the color transformation and calibration tools and techniques that you can find in different blocks of the AMD hardware. I borrowed some images from [2] [3] [4] to help you understand the information.

Predefined Transfer Functions (Named Fixed Curves): Transfer functions serve as the bridge between the digital and visual worlds, defining the mathematical relationship between digital color values and linear scene/display values and ensuring consistent color reproduction across different devices and media. You can learn more about curves in the chapter GPU Gems 3 - The Importance of Being Linear by Larry Gritz and Eugene d'Eon. ITU-R 2100 introduces three main types of transfer functions:
  • OETF: the opto-electronic transfer function, which converts linear scene light into the video signal, typically within a camera.
  • EOTF: electro-optical transfer function, which converts the video signal into the linear light output of the display.
  • OOTF: opto-optical transfer function, which has the role of applying the rendering intent.
AMD's display driver supports the following pre-defined transfer functions (aka named fixed curves):
  • Linear/Unity: linear/identity relationship between pixel value and luminance value;
  • Gamma 2.2, Gamma 2.4, Gamma 2.6: pure power functions;
  • sRGB: 2.4: The piece-wise transfer function from IEC 61966-2-1:1999;
  • BT.709: has a linear segment in the bottom part and then a power function with a 0.45 (~1/2.22) gamma for the rest of the range; standardized by ITU-R BT.709-6;
  • PQ (Perceptual Quantizer): used for HDR display, allows luminance range capability of 0 to 10,000 nits; standardized by SMPTE ST 2084.
These capabilities vary depending on the hardware block, with some utilizing hardcoded curves and others relying on AMD's color module to construct curves from standardized coefficients. It also supports user/custom curves built from a lookup table.
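To make one of these curves concrete, the sRGB transfer function mentioned above is simple enough to write down; here is a quick reference implementation of its decoding (EOTF) direction, purely as an illustration and not driver code:
#include <math.h>

/* sRGB EOTF (IEC 61966-2-1): encoded value in [0.0, 1.0] -> linear light. */
static double srgb_eotf(double encoded)
{
	if (encoded <= 0.04045)
		return encoded / 12.92;                 /* linear toe near black */
	return pow((encoded + 0.055) / 1.055, 2.4);     /* power segment */
}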

1D LUTs (1-dimensional Lookup Table): A 1D LUT is a versatile tool, defining a one-dimensional color transformation based on a single parameter. It's very well explained by Jeremy Selan at GPU Gems 2 - Chapter 24 Using Lookup Tables to Accelerate Color Transformations. It enables adjustments to color, brightness, and contrast, making it ideal for fine-tuning. In the Linux AMD display driver, the atomic API offers a 1D LUT with 4096 entries and 8-bit depth, while legacy gamma uses a size of 256.
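For what it's worth, userspace hands such a 1D LUT to KMS as an array of struct drm_color_lut entries wrapped in a property blob; a minimal sketch with libdrm (identity curve, error handling omitted, purely illustrative) could look like this:
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>   /* also pulls in struct drm_color_lut */

#define LUT_SIZE 4096

/* Build an identity 1D LUT and wrap it in a KMS property blob. */
static int create_identity_lut_blob(int drm_fd, uint32_t *blob_id)
{
	static struct drm_color_lut lut[LUT_SIZE];

	for (int i = 0; i < LUT_SIZE; i++) {
		uint16_t v = (uint16_t)(((uint32_t)i * 0xFFFFu) / (LUT_SIZE - 1));
		lut[i].red = lut[i].green = lut[i].blue = v;
	}

	/* The returned blob id is what gets set on the LUT property. */
	return drmModeCreatePropertyBlob(drm_fd, lut, sizeof(lut), blob_id);
}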

3D LUTs (3-dimensional Lookup Table): These tables work in three dimensions: red, green, and blue. They're perfect for complex color transformations and adjustments between color channels. They're also more complex to manage and require more computational resources. Jeremy also explains 3D LUT at GPU Gems 2 - Chapter 24 Using Lookup Tables to Accelerate Color Transformations.

CTM (Color Transformation Matrices): Color transformation matrices facilitate the transition between different color spaces, playing a crucial role in color space conversion.

HDR Multiplier: HDR multiplier is a factor applied to the color values of an image to increase their overall brightness.

AMD Color Capabilities in the Hardware Pipeline First, let's take a closer look at the AMD Display Core Next hardware pipeline in the Linux kernel documentation for the AMDGPU driver - Display Core Next. In the AMD Display Core Next hardware pipeline, we encounter two hardware blocks with color capabilities: the Display Pipe and Plane (DPP) and the Multiple Pipe/Plane Combined (MPC). The DPP handles color adjustments per plane before blending, while the MPC engages in post-blending color adjustments. In short, we expect DPP color capabilities to match up with DRM plane properties, and MPC color capabilities to play nice with DRM CRTC properties. Note: here's the catch: there are some DRM CRTC color transformations that don't have a corresponding AMD MPC color block, and vice versa. It's like a puzzle, and we're here to solve it!

AMD Color Blocks and Capabilities We can finally talk about the color capabilities of each AMD color block. As it varies based on the generation of hardware, let's take the DCN3+ family as reference. What's possible to do before and after blending depends on hardware capabilities described in the kernel driver by struct dpp_color_caps and struct mpc_color_caps. The AMD Steam Deck hardware provides a tangible example of these capabilities. Therefore, we take the SteamDeck/DCN301 driver as an example and look at the Color pipeline capabilities described in the file: drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resources.c
/* Color pipeline capabilities */
dc->caps.color.dpp.dcn_arch = 1; // If it is a Display Core Next (DCN): yes. Zero means DCE.
dc->caps.color.dpp.input_lut_shared = 0;
dc->caps.color.dpp.icsc = 1; // Input Color Space Conversion (CSC) matrix.
dc->caps.color.dpp.dgam_ram = 0; // The old degamma block for degamma curve (hardcoded and LUT). "Gamma correction" is the new one.
dc->caps.color.dpp.dgam_rom_caps.srgb = 1; // sRGB hardcoded curve support
dc->caps.color.dpp.dgam_rom_caps.bt2020 = 1; // BT2020 hardcoded curve support (seems not actually in use)
dc->caps.color.dpp.dgam_rom_caps.gamma2_2 = 1; // Gamma 2.2 hardcoded curve support
dc->caps.color.dpp.dgam_rom_caps.pq = 1; // PQ hardcoded curve support
dc->caps.color.dpp.dgam_rom_caps.hlg = 1; // HLG hardcoded curve support
dc->caps.color.dpp.post_csc = 1; // CSC matrix
dc->caps.color.dpp.gamma_corr = 1; // New "Gamma Correction" block for degamma user LUT;
dc->caps.color.dpp.dgam_rom_for_yuv = 0;
dc->caps.color.dpp.hw_3d_lut = 1; // 3D LUT support. If so, it's always preceded by a shaper curve. 
dc->caps.color.dpp.ogam_ram = 1; // "Blend Gamma" block for custom curve just after blending
// no OGAM ROM on DCN301
dc->caps.color.dpp.ogam_rom_caps.srgb = 0;
dc->caps.color.dpp.ogam_rom_caps.bt2020 = 0;
dc->caps.color.dpp.ogam_rom_caps.gamma2_2 = 0;
dc->caps.color.dpp.ogam_rom_caps.pq = 0;
dc->caps.color.dpp.ogam_rom_caps.hlg = 0;
dc->caps.color.dpp.ocsc = 0;
dc->caps.color.mpc.gamut_remap = 1; // Post-blending CTM (pre-blending CTM is always supported)
dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; // Post-blending 3D LUT (preceded by shaper curve)
dc->caps.color.mpc.ogam_ram = 1; // Post-blending regamma.
// No pre-defined TF supported for regamma.
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
dc->caps.color.mpc.ogam_rom_caps.gamma2_2 = 0;
dc->caps.color.mpc.ogam_rom_caps.pq = 0;
dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
dc->caps.color.mpc.ocsc = 1; // Output CSC matrix.
I included some inline comments in each element of the color caps to quickly describe them, but you can find the same information in the Linux kernel documentation. See more in struct dpp_color_caps, struct mpc_color_caps and struct rom_curve_caps. Now, using this guideline, we go through color capabilities of DPP and MPC blocks and talk more about mapping driver-specific properties to corresponding color blocks.

DPP Color Pipeline: Before Blending (Per Plane) Let's explore the capabilities of DPP blocks and what you can achieve with each color block. The very first thing to pay attention to is the display architecture of the hardware: previously AMD used a display architecture called DCE - Display and Compositing Engine, but newer hardware follows DCN - Display Core Next. The architecture is described by: dc->caps.color.dpp.dcn_arch

AMD Plane Degamma: TF and 1D LUT Described by: dc->caps.color.dpp.dgam_ram, dc->caps.color.dpp.dgam_rom_caps, dc->caps.color.dpp.gamma_corr AMD Plane Degamma data is mapped to the initial stage of the DPP pipeline. It is utilized to transition from scanout/encoded values to linear values for arithmetic operations. Plane Degamma supports both pre-defined transfer functions and 1D LUTs, depending on the hardware generation. DCN2 and older families handle both types of curve in the Degamma RAM block (dc->caps.color.dpp.dgam_ram); DCN3+ separates hardcoded curves and the 1D LUT into two blocks: the Degamma ROM (dc->caps.color.dpp.dgam_rom_caps) and the Gamma Correction block (dc->caps.color.dpp.gamma_corr), respectively. Pre-defined transfer functions:
  • they are hardcoded curves (read-only memory - ROM);
  • supported curves: sRGB EOTF, BT.709 inverse OETF, PQ EOTF and HLG OETF, Gamma 2.2, Gamma 2.4 and Gamma 2.6 EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. Setting TF = Identity/Default and LUT as NULL means bypass. References:
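
To illustrate the userspace side, here is a hedged sketch of building such a blob with libdrm: it fills a 4096-entry table with the sRGB EOTF and wraps it with drmModeCreatePropertyBlob(). Attaching the blob to the driver-specific plane degamma property (and the property-ID lookup) is omitted here.
#include <math.h>
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

#define LUT_SIZE 4096

/* Build an sRGB-EOTF degamma LUT (encoded -> linear) and wrap it in a
 * DRM property blob. Returns the blob id, or 0 on failure. */
static uint32_t create_srgb_degamma_blob(int fd)
{
    static struct drm_color_lut lut[LUT_SIZE];
    uint32_t blob_id = 0;

    for (int i = 0; i < LUT_SIZE; i++) {
        double x = (double)i / (LUT_SIZE - 1);
        double y = (x <= 0.04045) ? x / 12.92
                                  : pow((x + 0.055) / 1.055, 2.4);
        uint16_t v = (uint16_t)lround(y * 0xFFFF);

        lut[i].red = lut[i].green = lut[i].blue = v;
        lut[i].reserved = 0;
    }

    if (drmModeCreatePropertyBlob(fd, lut, sizeof(lut), &blob_id))
        return 0;
    return blob_id;
}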

AMD Plane 3x4 CTM (Color Transformation Matrix) AMD Plane CTM data goes to the DPP Gamut Remap block, supporting a 3x4 fixed point (s31.32) matrix for color space conversions. The data is interpreted as a struct drm_color_ctm_3x4. Setting NULL means bypass. References:
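
The s31.32 values follow the same sign-magnitude fixed-point convention as the existing DRM CTM property. A small standalone sketch of packing a floating-point 3x4 matrix into that encoding:
#include <math.h>
#include <stdint.h>

/* Encode a double in DRM's S31.32 sign-magnitude fixed point:
 * bit 63 is the sign, bits 62..32 the integer part, bits 31..0 the fraction. */
static uint64_t to_s31_32(double v)
{
    uint64_t sign = 0;

    if (v < 0.0) {
        sign = 1ULL << 63;
        v = -v;
    }
    return sign | (uint64_t)llround(v * 4294967296.0); /* v * 2^32 */
}

/* Pack a row-major 3x4 matrix (the last column holds the RGB offsets). */
static void pack_ctm_3x4(const double in[12], uint64_t out[12])
{
    for (int i = 0; i < 12; i++)
        out[i] = to_s31_32(in[i]);
}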

AMD Plane Shaper: TF + 1D LUT Described by: dc->caps.color.dpp.hw_3d_lut The Shaper block fine-tunes color adjustments before applying the 3D LUT, optimizing the use of the limited entries in each dimension of the 3D LUT. On AMD hardware, a 3D LUT always implies a preceding shaper 1D LUT used for delinearizing and/or normalizing the color space before applying the 3D LUT, so this entry in the DPP color caps (dc->caps.color.dpp.hw_3d_lut) means support for both the shaper 1D LUT and the 3D LUT. A pre-defined transfer function enables delinearizing content with or without a shaper LUT, where the AMD color module calculates the resulting shaper curve. Shaper curves go from linear values to encoded values. If we are already in a non-linear space and/or don't need to normalize values, we can set an Identity TF for the shaper, which works similarly to bypass and is also the default TF value. Pre-defined transfer functions:
  • there is no DPP Shaper ROM. Curves are calculated by AMD color modules. Check calculate_curve() function in the file amd/display/modules/color/color_gamma.c.
  • supported curves: Identity, sRGB inverse EOTF, BT.709 OETF, PQ inverse EOTF, HLG OETF, and Gamma 2.2, Gamma 2.4, Gamma 2.6 inverse EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. When setting Plane Shaper TF (!= Identity) and LUT at the same time, the color module will combine the pre-defined TF and the custom LUT values into the LUT that's actually programmed. Setting TF = Identity/Default and LUT as NULL works as bypass. References:

AMD Plane 3D LUT Described by: dc->caps.color.dpp.hw_3d_lut The 3D LUT in the DPP block facilitates complex color transformations and adjustments. A 3D LUT is a three-dimensional array where each element is an RGB triplet. As mentioned before, dc->caps.color.dpp.hw_3d_lut describes whether the DPP 3D LUT is supported. The AMD driver-specific property advertises the size of a single dimension via the LUT3D_SIZE property. Plane 3D LUT is a blob property where the data is interpreted as an array of struct drm_color_lut elements and the number of entries is LUT3D_SIZE cubed. The array contains samples from the approximated function; values between samples are estimated by tetrahedral interpolation. The array is accessed with three indices, one for each input dimension (color channel), blue being the outermost dimension and red the innermost. This distribution is better visualized when examining the code in [RFC PATCH 5/5] drm/amd/display: Fill 3D LUT from userspace by Alex Hung:
+	for (nib = 0; nib < 17; nib++) {
+		for (nig = 0; nig < 17; nig++) {
+			for (nir = 0; nir < 17; nir++) {
+				ind_lut = 3 * (nib + 17*nig + 289*nir);
+
+				rgb_area[ind].red = rgb_lib[ind_lut + 0];
+				rgb_area[ind].green = rgb_lib[ind_lut + 1];
+				rgb_area[ind].blue = rgb_lib[ind_lut + 2];
+				ind++;
+			}
+		}
+	}
In our driver-specific approach we opted to advertise its behavior to userspace instead of implicitly dealing with it in the kernel driver. AMD's hardware supports 3D LUTs with 17-size or 9-size (4913 and 729 entries respectively), and you can choose between 10-bit or 12-bit. In the current driver-specific work we focus on enabling only the 17-size, 12-bit 3D LUT, as in [PATCH v3 25/32] drm/amd/display: add plane 3D LUT support:
+		/* Stride and bit depth are not programmable by API yet.
+		 * Therefore, only supports 17x17x17 3D LUT (12-bit).
+		 */
+		lut->lut_3d.use_tetrahedral_9 = false;
+		lut->lut_3d.use_12bits = true;
+		lut->state.bits.initialized = 1;
+		__drm_3dlut_to_dc_3dlut(drm_lut, drm_lut3d_size, &lut->lut_3d,
+					lut->lut_3d.use_tetrahedral_9,
+					MAX_COLOR_3DLUT_BITDEPTH);
A refined control of 3D LUT parameters should go through a follow-up version or generic API. Setting 3D LUT to NULL means bypass. References:
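
To tie the size and layout together, here is a standalone userspace sketch that fills a 17x17x17 table in the order described above (blue outermost, red innermost); the identity mapping stands in for a real transform, and wrapping the array in a property blob works as in the 1D LUT sketch earlier:
#include <stdlib.h>
#include <xf86drmMode.h>

#define LUT3D_DIM 17

/* Fill a 17x17x17 3D LUT with the identity transform, using the
 * blue-outermost / red-innermost entry order described above. */
static struct drm_color_lut *create_identity_3dlut(size_t *n_entries)
{
    size_t n = (size_t)LUT3D_DIM * LUT3D_DIM * LUT3D_DIM;
    struct drm_color_lut *lut = calloc(n, sizeof(*lut));
    size_t i = 0;

    if (!lut)
        return NULL;

    for (int b = 0; b < LUT3D_DIM; b++)
        for (int g = 0; g < LUT3D_DIM; g++)
            for (int r = 0; r < LUT3D_DIM; r++) {
                lut[i].red   = r * 0xFFFF / (LUT3D_DIM - 1);
                lut[i].green = g * 0xFFFF / (LUT3D_DIM - 1);
                lut[i].blue  = b * 0xFFFF / (LUT3D_DIM - 1);
                i++;
            }

    *n_entries = n;
    return lut;
}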

AMD Plane Blend/Out Gamma: TF + 1D LUT Described by: dc->caps.color.dpp.ogam_ram The Blend/Out Gamma block applies the final touch-up before blending, allowing users to linearize content after the 3D LUT and just before blending. It supports both a 1D LUT and pre-defined TFs. We can see the Shaper and Blend LUTs as 1D LUTs that sandwich the 3D LUT. So, if we don't need 3D LUT transformations, we may want to only use the Degamma block to linearize and skip Shaper, 3D LUT and Blend. Pre-defined transfer function:
  • there is no DPP Blend ROM. Curves are calculated by AMD color modules;
  • supported curves: Identity, sRGB EOTF, BT.709 inverse OETF, PQ EOTF, HLG inverse OETF, and Gamma 2.2, Gamma 2.4, Gamma 2.6 EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. If plane_blend_tf_property != Identity TF, AMD color module will combine the user LUT values with pre-defined TF into the LUT parameters to be programmed. Setting TF = Identity/Default and LUT to NULL means bypass. References:

MPC Color Pipeline: After Blending (Per CRTC)

DRM CRTC Degamma 1D LUT The degamma lookup table (LUT) for converting framebuffer pixel data before applying the color conversion matrix. The data is interpreted as an array of struct drm_color_lut elements. Setting NULL means bypass. Not really supported. The driver is currently reusing the DPP degamma LUT block (dc->caps.color.dpp.dgam_ram and dc->caps.color.dpp.gamma_corr) for supporting DRM CRTC Degamma LUT, as explained in [PATCH v3 20/32] drm/amd/display: reject atomic commit if setting both plane and CRTC degamma.

DRM CRTC 3x3 CTM Described by: dc->caps.color.mpc.gamut_remap It sets the current transformation matrix (CTM) applied to pixel data after the lookup through the degamma LUT and before the lookup through the gamma LUT. The data is interpreted as a struct drm_color_ctm. Setting NULL means bypass.
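
For the userspace side, a hedged sketch of handing a 3x3 matrix to the standard DRM CRTC "CTM" property: find_prop_id() is a hypothetical helper standing in for the usual drmModeObjectGetProperties() walk, and to_s31_32() is the fixed-point encoder from the earlier plane CTM sketch.
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Hypothetical helper (not shown): walks drmModeObjectGetProperties() and
 * returns the id of the property named `name` on the given object. */
uint32_t find_prop_id(int fd, uint32_t obj_id, uint32_t obj_type,
                      const char *name);
/* S31.32 encoder from the plane 3x4 CTM sketch earlier. */
uint64_t to_s31_32(double v);

/* Encode a row-major 3x3 matrix and attach it to the CRTC's CTM property. */
static int crtc_set_ctm(int fd, uint32_t crtc_id, const double m[9])
{
    struct drm_color_ctm ctm;
    uint32_t blob_id, prop_id;

    for (int i = 0; i < 9; i++)
        ctm.matrix[i] = to_s31_32(m[i]);

    if (drmModeCreatePropertyBlob(fd, &ctm, sizeof(ctm), &blob_id))
        return -1;

    prop_id = find_prop_id(fd, crtc_id, DRM_MODE_OBJECT_CRTC, "CTM");
    return drmModeObjectSetProperty(fd, crtc_id, DRM_MODE_OBJECT_CRTC,
                                    prop_id, blob_id);
}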

DRM CRTC Gamma 1D LUT + AMD CRTC Gamma TF Described by: dc->caps.color.mpc.ogam_ram After all that, you might still want to convert the content to wire encoding. No worries; in addition to the DRM CRTC 1D LUT, we've got an AMD CRTC Gamma transfer function (TF) to make it happen. Possible TF values are defined by enum amdgpu_transfer_function. Pre-defined transfer functions:
  • there is no MPC Gamma ROM. Curves are calculated by AMD color modules.
  • supported curves: Identity, sRGB inverse EOTF, BT.709 OETF, PQ inverse EOTF, HLG OETF, and Gamma 2.2, Gamma 2.4, Gamma 2.6 inverse EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. When setting CRTC Gamma TF (!= Identity) and LUT at the same time, the color module will combine the pre-defined TF and the custom LUT values into the LUT that's actually programmed. Setting TF = Identity/Default and LUT to NULL means bypass. References:
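
Mirroring the degamma sketch earlier, generating a regamma table means applying the sRGB inverse EOTF (linear to encoded). A standalone sketch, with the property wiring omitted:
#include <math.h>
#include <stdint.h>

#define GAMMA_LUT_SIZE 4096

/* Fill a regamma table: linear -> sRGB-encoded, 16-bit entries. */
static void fill_srgb_regamma(uint16_t out[GAMMA_LUT_SIZE])
{
    for (int i = 0; i < GAMMA_LUT_SIZE; i++) {
        double x = (double)i / (GAMMA_LUT_SIZE - 1);
        double y = (x <= 0.0031308) ? 12.92 * x
                                    : 1.055 * pow(x, 1.0 / 2.4) - 0.055;
        out[i] = (uint16_t)lround(y * 0xFFFF);
    }
}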

Others

AMD CRTC Shaper and 3D LUT We have previously worked on exposing the CRTC shaper and CRTC 3D LUT, but they were removed from the AMD driver-specific color series because they lack a userspace use case. The CRTC shaper and 3D LUT work similarly to the plane shaper and 3D LUT, but after blending (MPC block). The difference here is that setting (not bypassing) the Shaper and Gamma blocks together is not expected, since both blocks are used to delinearize the input space. In summary, we either set Shaper + 3D LUT or Gamma.

Input and Output Color Space Conversion There are two other color capabilities of AMD display hardware that were integrated into DRM by previous work and are worth a brief explanation here. The DC Input CSC sets pre-defined coefficients from the values of the DRM plane color_range and color_encoding properties. It is used for color space conversion of the input content. On the other hand, we have the DC Output CSC (OCSC), which sets pre-defined coefficients from the DRM connector colorspace property. It is used for color space conversion of the composed image to the one supported by the sink. References:

The search for rainbow treasures is not over yet If you want to understand a little more about this work, be sure to watch the two talks that Joshua and I presented at XDC 2023 about AMD/Steam Deck colors on Gamescope. In the time between the first and second part of this blog post, Uma Shashank and Chaitanya Kumar Borah published the plane color pipeline for Intel, and Harry Wentland implemented a generic API for DRM based on VKMS support. We discussed these two proposals and the next steps for Color on Linux during the Color Management workshop at XDC 2023, and I briefly shared the workshop results in the 2023 XDC lightning talk session. The search for rainbow treasures is not over yet! We plan to meet again next year at the 2024 Display Hackfest in Coruña, Spain (Igalia's HQ) to keep up the pace and continue advancing today's display needs on Linux. Finally, a HUGE thank you to everyone who worked with me on exploring AMD's color capabilities and making them available in userspace.

5 November 2023

Petter Reinholdtsen: Test framework for DocBook processors / formatters

All the books I have published so far have been using DocBook somewhere in the process. For the first book, the source format was DocBook, while for every later book it was an intermediate format used as the stepping stone to be able to present the same manuscript in several formats, on paper, as an ebook in ePub format, as an HTML page and as a PDF file either for paper production or for Internet consumption. This is made possible with a wide variety of free software tools with DocBook support in Debian. The source formats of later books have been docx via rst, Markdown, Filemaker and Asciidoc, and for all of these I was able to generate a suitable DocBook file for further processing using pandoc, a2x and asciidoctor, as well as rendering using xmlto, dbtoepub, dblatex, docbook-xsl and fop. Most of the books I have published are translated books, with English as the source language. The use of po4a to handle translations using the gettext PO format has been a blessing, but publishing translated books had triggered the need to ensure the DocBook tools handle relevant languages correctly. For every new language I have published, I had to submit patches to dblatex, dbtoepub and docbook-xsl fixing incorrect language and country specific issues in the frameworks themselves. Typically this has been missing keywords like 'figure' or sort ordering of index entries. After a while it became tiresome to only discover issues like this by accident, and I decided to write a DocBook "test framework" exercising various features of DocBook and allowing me to see all features exercised for a given language. It consists of a set of DocBook files: a version 4 book, a version 5 book, a v4 book set, a v4 selection of problematic tables, one v4 testing sidefloat and finally one v4 testing a book of articles. The DocBook files are accompanied by a set of build rules for building PDF using dblatex and docbook-xsl/fop, HTML using xmlto or docbook-xsl and epub using dbtoepub. The result is a set of files visualizing footnotes, indexes, table of contents, figures, formulas and other DocBook features, allowing for a quick review of the completeness of the given locale settings. To build with a different language setting, all one needs to do is edit the lang= value in the .xml file to pick a different ISO 639 code value and run 'make'. The test framework source code is available from Codeberg, and a generated set of presentations of the various examples is available as Codeberg static web pages at https://pere.codeberg.page/docbook-example/. Using this test framework I have been able to discover and report several bugs and missing features in various tools, and got a lot of them fixed. For example I got Northern Sami keywords added to both docbook-xsl and dblatex, fixed several typos in Norwegian Bokmål and Norwegian Nynorsk, support for non-ascii title IDs added to pandoc, Norwegian index sorting support fixed in xindy and initial Norwegian Bokmål support added to dblatex. Some issues still remain, though. Default index sorting rules are still broken in several tools, so the Norwegian letters æ, ø and å are more often than not sorted incorrectly in the book index. The test framework recently received some more polish, as part of publishing my latest book. This book contained a lot of fairly complex tables, which exposed bugs in some of the tools. This made me add a new test file with various tables, as well as spend some time to brush up the build rules.
My goal is for the test framework to exercise all DocBook features to make it easier to see which features work with different processors, and hopefully get them all to support the full set of DocBook features. Feel free to send patches to extend the test set, and test it with your favorite DocBook processor. Please visit these two URLs to learn more: If you want to learn more about DocBook and translations, I recommend having a look at the DocBook web site, the DoCookBook site and my earlier blog post on how the Skolelinux project processes and translates documentation, as well as a talk I gave earlier this year on how to translate and publish books using free software (Norwegian only). As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

25 October 2023

Sven Hoexter: Curing vpnc-scripts Symptoms

I stick to some very archaic workflows, e.g. to connect to some corp VPN I just run sudo vpnc-connect and later on sudo vpnc-disconnect. In the past that also managed to restore my resolv.conf; currently it doesn't. According to a colleague that's also the case for Ubuntu. Taking a step back, the sane way would be to use the NetworkManager vpnc plugin, but that does not work with this specific case because we use uncool VPN tech which requires the Enable weak authentication setting for vpnc. There is a feature request open for that one at https://gitlab.gnome.org/GNOME/NetworkManager-vpnc/-/issues/11 Taking another step back I thought that it shouldn't be that hard to add some checkbox, a boolean and render out another config flag or line in a config file. Not as intuitive as I thought, this mix of XML and C. So let's quickly look elsewhere. What happens is that the backup files in /var/run/vpnc/ are created by the vpnc-scripts script called vpnc-script, but not moved back, because it adds some pid as a suffix and the pid is not the final pid of the vpnc process. Basically it cannot find the backup when it tries to restore it. So I decided to replace the pid guessing code with a suffix made up of the gateway IP and the tun interface name. No idea if that is stable in all circumstances (someone with a vpn name DNS RR?) or with several connections to different gateways. But good enough for myself, so here is my patch:
vpnc-scripts [master]$ cat debian/patches/replace-pid-detection 
Index: vpnc-scripts/vpnc-script
===================================================================
--- vpnc-scripts.orig/vpnc-script
+++ vpnc-scripts/vpnc-script
@@ -91,21 +91,15 @@ OS="`uname -s`"
 HOOKS_DIR=/etc/vpnc
-# Use the PID of the controlling process (vpnc or OpenConnect) to
-# uniquely identify this VPN connection. Normally, the parent process
-# is a shell, and the grandparent's PID is the relevant one.
-# OpenConnect v9.0+ provides VPNPID, so we don't need to determine it.
-if [ -z "$VPNPID" ]; then
-    VPNPID=$PPID
-    PCMD=`ps -c -o cmd= -p $PPID`
-    case "$PCMD" in
-        *sh) VPNPID=`ps -o ppid= -p $PPID` ;;
-    esac
+# This whole script is called twice via vpnc-connect. On the first run
+# the variables are empty. Catch that and move on when they're there.
+if [ -n "$VPNGATEWAY" ]; then
+    BACKUPID="${VPNGATEWAY}_${TUNDEV}"
+    DEFAULT_ROUTE_FILE=/var/run/vpnc/defaultroute.${BACKUPID}
+    DEFAULT_ROUTE_FILE_IPV6=/var/run/vpnc/defaultroute_ipv6.${BACKUPID}
+    RESOLV_CONF_BACKUP=/var/run/vpnc/resolv.conf-backup.${BACKUPID}
 fi
-DEFAULT_ROUTE_FILE=/var/run/vpnc/defaultroute.${VPNPID}
-DEFAULT_ROUTE_FILE_IPV6=/var/run/vpnc/defaultroute_ipv6.${VPNPID}
-RESOLV_CONF_BACKUP=/var/run/vpnc/resolv.conf-backup.${VPNPID}
 SCRIPTNAME=`basename $0`
 # some systems, eg. Darwin & FreeBSD, prune /var/run on boot
Or rolled into a debian package at https://sven.stormbind.net/debian/vpnc-scripts/ The colleague decided to stick to NetworkManager, moved the vpnc binary aside and added a wrapper which invokes vpnc with --enable-weak-authentication. The beauty is, all of this will break on updates, so at some point someone has to understand GTK4 to fix the NetworkManager plugin for good. :)

14 October 2023

Ravi Dwivedi: Kochi - Wayanad Trip in August-September 2023

A trip full of hitchhiking, beautiful places and welcoming locals.

Day 1: Arrival in Kochi Kochi is a city in the state of Kerala, India. This year s DebConf was to be held in Kochi from 3rd September to 17th of September, which I was planning to attend. My friend Suresh, who was planning to join, told me that 29th August 2023 will be Onam, a major festival of the state of Kerala. So, we planned a Kerala trip before the DebConf. We booked early morning flights for Kochi from Delhi and reached Kochi on 28th August. We had booked a hostel named Zostel in Ernakulam. During check-in, they asked me to fill a form which required signing in using a Google account. I told them I don t have a Google account and I don t want to create one either. The people at the front desk seemed receptive, so I went ahead with telling them the problems of such a sign-in being mandatory for check-in. Anyways, they only took a photo of my passport and let me check-in without a Google account. We stayed in a ten room dormitory, which allowed travellers of any gender. The dormitory room was air-conditioned, spacious, clean and beds were also comfortable. There were two bathrooms in the dormitory and they were clean. Plus, there was a separate dormitory room in the hostel exclusive for females. I noticed that that Zostel was not added in the OpenStreetMap and so, I added it :) . The hostel had a small canteen for tea and snacks, a common sitting area outside the dormitories, which had beds too. There was a separate silent room, suitable for people who want to work.
Dormitory room in Zostel Ernakulam, Kochi.
Beds in Zostel Ernakulam, Kochi.
We had lunch at a nearby restaurant and it was hard to find anything vegetarian for me. I bought some freshly made banana chips from the street and they were tasty. As far as I remember, I had a big glass of pineapple juice for lunch. Then I went to the Broadway market and bought some cardamom and cinnamon for home. I also went to a nearby supermarket and bought Matta brown rice for home. Then, I looked for a courier shop to send the things home but all of them were closed due to Onam festival. After returning to the Zostel, I overslept till 9 PM and in the meanwhile, Suresh planned with Saidut and Shwetank (who met us during our stay in Zostel) to go to a place in Fort Kochi for dinner. I suspected I will be disappointed by lack of vegetarian options as they were planning to have fish. I already had a restaurant in mind - Brindhavan restaurant (suggested by Anupa), which was a pure vegetarian restaurant. To reach there, I got off at Palarivattom metro station and started looking for an auto-rickshaw to get to the restaurant. I didn t get any for more than 5 minutes. Since that restaurant was not added to the OpenStreetMap, I didn t even know how far that was and which direction to go to. Then, I saw a Zomato delivery person on a motorcycle and asked him where the restaurant was. It was already 10 PM and the restaurant closes at 10:30. So, I asked him whether he can drop me off. He agreed and dropped me off at that restaurant. It was 4-5 km from that metro station. I tipped him and expressed my gratefulness for the help. He refused to take the tip, but I insisted and he accepted. I entered the restaurant and it was coming to a close, so many items were not available. I ordered some Kadhai Paneer (only item left) with naan. It tasted fine. Since the next day was Thiruvonam, I asked the restaurant about the Sadya thali menu and prices for the next day. I planned to eat Sadya thali at that restaurant, but my plans got changed later.
Onam sadya menu from Brindhavan restaurant.

Day 2: Onam celebrations Next day, on 29th of August 2023, we had plan to leave for Wayanad. Wayanad is a hill station in Kerala and a famous tourist spot. Praveen suggested to visit Munnar as it is far closer to Kochi than Wayanad (80 km vs 250 km). But I had already visited Munnar in my previous trips, so we chose Wayanad. We had a train late night from Ernakulam Junction (at 23:30 hours) to Kozhikode, which is the nearest railway station from Wayanad. So, we checked out in the morning as we had plans to roam around in Kochi before taking the train. Zostel was celebrating Onam on that day. To opt-in, we had to pay 400 rupees, which included a Sadya Thali and a mundu. Me and Suresh paid the amount and opted in for the celebrations. Sadya thali had Rice, Sambhar, Rasam, Avial, Banana Chips, Pineapple Pachadi, Pappadam, many types of pickels and chutneys, Pal Ada Payasam and Coconut jaggery Pasam. And, there was water too :). Those payasams were really great and I had one more round of them. Later, I had a lot of variety of payasams during the DebConf.
Sadya lined up for serving
Sadya thali served on banana leaf.
So, we hung out in the common room and put our luggage there. We played UNO and had conversations with other travellers in the hostel. I had a fun time there and I still think it is one of the best hostel experiences I had. We made good friends with Saiduth (Telangana) and Shwetank (Uttarakhand). They were already aware of software like Debian, and we had some detailed conversations about the Free Software movement. I remember explaining the difference between the terms 'Open Source' and 'Free Software'. I also told them about the Streetcomplete app, a beginner-friendly app to edit OpenStreetMap. We had dinner at a place nearby (named Palaraam), but again, the vegetarian options were very limited! After dinner, we came back to the Zostel and me and Suresh left for Ernakulam Junction to catch our train Maveli Express (16604).

Day 3: Going to Wayanad Maveli Express was scheduled to reach Kozhikode at 03:25 (morning). I had set alarms from 03:00 to 03:30, with the gap of 10 minutes. Every time I woke up, I turned off the alarm. Then I woke up and saw train reaching the Kozhikode station and woke up Suresh for deboarding. But then I noticed that the train is actually leaving the station, not arriving! This means we missed our stop. Now we looked at the next stops and whether we can deboard there. I was very sleepy and wanted to take a retiring room at some station before continuing our journey to Wayanad. The next stop was Quilandi and we checked online that it didn t have a retiring room. So, we skipped this stop. We got off at the next stop named Vadakara and found out no retiring room was available. So, we asked about information regarding bus for Wayanad and they said that there is a bus to Wayanad around 07:00 hours from bus station which was a few kilometres from the railway station. We took a bus for Kalpetta (in Wayanad) at around 07:00. The destination of the buses were written in Malayalam, which we could not read. Once again, the locals helped us to get on to the bus to Kalpetta. Vadakara is not a big city and it can be hard to find people who know good Hindi or English, unlike Kochi. Despite language issues, I had no problem there in navigation, thanks to locals. I mostly spent time sleeping during the bus journey. A few hours later, the bus dropped us at Kalpetta. We had a booking at a hostel in Rippon village. It was 16 km from Kalpetta. On the way, we were treated with beautiful views of nature, which was present everywhere in Wayanad. The place was covered with tea gardens and our eyes were treated with beautiful scenery at every corner.
We were treated with such views during the Wayanad trip.
Rippon village was a very quiet place and I liked the calm atmosphere. This place is blessed by nature and has stunning scenery. I found English was more common than Hindi in Wayanad. Locals were very nice and helped me, even if they didn t know my language.
A road in Rippon.
After catching some sleep at the hostel, I went out in the afternoon. I hitchhiked to reach the main road from the hostel. I bought more spices from a nearby shop and realized that I should have waited for my visit to Wayanad to buy cardamom, which I already bought from Kochi. Then, I was looking for post office to send spices home. The people at the spices shop told me that the nearby Rippon post office was closed by that time, but the post office at Meppadi was open, which was 5 km from there. I went to Meppadi and saw the post office closes at 15:00, but I reached five minutes late. My packing was not very good and they asked me to pack it tighter. There was a shop near the post office and the people there gave me a cardboard and tapes, and helped pack my stuff for the post. By the time I went to the post office again, it was 15:30. But they accepted my parcel for post.

Day 4: Kanthanpara Falls, Zostel Wayanad and Karapuzha Dam Kanthanpara waterfalls were 2 km from the hostel. I hitchhiked to the place from the hostel on a scooty. Entry ticket was worth Rs 40. There were good views inside and nothing much to see except the waterfalls.
Entry to Kanthanpara Falls.
Kanthanpara Falls.
We had a booking at Zostel Wayanad for this day and so we shifted there. Again, as with their Ernakulam branch, they asked me to fill a form which required signing in using Google, but when I said I don't have a Google account they checked me in without that. There were tea gardens inside the Zostel boundaries and the property was beautiful.
A view of Zostel Wayanad.
A map of Wayanad showing tourist places.
A view from inside the Zostel Wayanad property.
Later in the evening, I went to Karapuzha Dam. I witnessed a beautiful sunset during the journey. Karapuzha dam had many activities, like ziplining, and was nice to roam around. Chembra Peak is near the Zostel Wayanad. So, I was planning to trek to the heart-shaped lake. It was suggested by Praveen and, looking online, this trek seemed worth doing. There was an issue, however. The charges for the trek were Rs 1770 for up to five people. So, if I go alone I will have to spend Rs 1770 for the trek. If I go with another person, we split Rs 1770 into two, and so on. The optimal way to do it is to go in a group of five (you included :D). I asked the front desk at Zostel if they could connect me with people going to Chembra peak the next day, and they told me about a group of four people planning to go to Chembra peak the next day. I got lucky! All four of them were from Kerala and worked in Qatar.

Day 5: Chembra peak trek The date was 1st September 2023. I woke up early (05:30 in the morning) for the Chembra peak trek. I had bought hiking shoes especially for trekking, which turned out to be a very good idea. The ticket counter opens at 07:00. The group of four with which I planned to trek met me around 06:00 in the Zostel. We went to the ticket counter around 06:30. We had breakfast at shops selling Maggi noodles and bread omlette near the ticket counter. It was a hot day and the trek was difficult for an inexperienced person like me. The scenery was green and beautiful throughout.
Terrain during trekking towards the Chembra peak.
Heart-shaped lake at the Chembra peak.
Me at the heart-shaped lake.
Views from the top of the Chembra peak.
View of another peak from the heart-shaped lake.
While returning from the trek, I found out a shop selling bamboo rice, which I bought and will make bamboo rice payasam out of it at home (I have some coconut milk from Kerala too ;)). We returned to Zostel in the afternoon. I had muscle pain after the trek and it has still not completely disappeared. At night, we took a bus from Kalpetta to Kozhikode in order to return to Kochi.

Day 6: Return to Kochi At midnight of 2nd of September, we reached Kozhikode bus stand. Then we roamed around for something to eat. I didn t find anything vegetarian to eat. No surprises there! Then we went to Kozhikode railway station and looked for retiring rooms, but no luck there. We waited at the station and took the next train to Kochi at 03:30 and reached Ernakulam Junction at 07:30 (half hours before train s scheduled time!). From there, we went to Zostel Fort Kochi and stayed one night there and checked out next morning.

Day 7: Roaming around in Fort Kochi On 3rd of September, we roamed around in Fort Kochi. We visited the usual places - St Francis Church, Dutch Palace, Jew Town, Pardesi Synagogue. I also visited some homestays and the owners were very happy to show their place even when I made it clear that I was not looking for a stay. In the evening, we went to Kakkanad to attend DebConf. The story continues in my DebConf23 blog post.

11 October 2023

Iustin Pop: Not-quite-announcement: this blog is not entirely dead

Something, something then something else, and it's been another six months since I last wrote anything. The world is crazy, somehow there's no time for anything, and yet life moves forward, inexorably. (Well, without going into gory side-notes about how evil stops life in various parts of the world.) As proof that this blog is not dead, I've finally implemented proper per-page keywords support, rather than the hard-coded keywords that were present before. Well, those defaults are still present in all pages that don't declare keywords; I will probably remove them at some point. Implementing this in Hakyll is not hard, at all, if one semi-regularly writes code. There are a number of examples out there, but what was confusing: Anyway, after 2 files changed, 23 insertions(+), 8 deletions(-), this page is the first one to have any keywords. And the funny thing, after initially struggling to write a single line of Haskell, the final version of the keywords is something like this:
 
    where renderKeywords =
            return . escapeHtml . intercalate "," . map trim . splitAll ","
And I wrote that as is, it compiled successfully, and it did the right thing on the first try. I only write some Haskell code 2-3 times per year, but man, I love this language. Until next time, hopefully before 2024.

3 October 2023

Russ Allbery: Review: Monstrous Regiment

Review: Monstrous Regiment, by Terry Pratchett
Series: Discworld #31
Publisher: Harper
Copyright: October 2003
Printing: August 2014
ISBN: 0-06-230741-X
Format: Mass market
Pages: 457
Monstrous Regiment is the 31st Discworld novel, but it mostly stands by itself. You arguably could start here, although you would miss the significance of Vimes's presence and the references to The Truth. The graphical reading order guide puts it loosely after The Truth and roughly in the Industrial Revolution sequence, but the connections are rather faint.
There was always a war. Usually they were border disputes, the national equivalent of complaining that the neighbor was letting their hedge row grow too long. Sometimes they were bigger. Borogravia was a peace-loving country in the middle of treacherous, devious, warlike enemies. They had to be treacherous, devious, and warlike; otherwise, we wouldn't be fighting them, eh? There was always a war.
Polly's brother, who wanted nothing more than to paint (something that the god Nuggan and the ever-present Duchess certainly did not consider appropriate for a strapping young man), was recruited to fight in the war and never came back. Polly is worried about him and tired of waiting for news. Exit Polly, innkeeper's daughter, and enter the young lad Oliver Perks, who finds the army recruiters in a tavern the next town over. One kiss of the Duchess's portrait later, and Polly is a private in the Borogravian army. I suspect this is some people's favorite Discworld novel. If so, I understand why. It was not mine, for reasons that I'll get into, but which are largely not Pratchett's fault and fall more into the category of pet peeves. Pratchett has dealt with both war and gender in the same book before. Jingo is also about a war pushed by a ruling class for stupid reasons, and featured a substantial subplot about Nobby cross-dressing that turns into a deeper character re-evaluation. I thought the war part of Monstrous Regiment was weaker (this is part of my complaint below), but gender gets a considerably deeper treatment. Monstrous Regiment is partly about how arbitrary and nonsensical gender roles are, and largely about how arbitrary and abusive social structures can become weirdly enduring because they build up their own internally reinforcing momentum. No one knows how to stop them, and a lot of people find familiar misery less frightening than unknown change, so the structure continues despite serving no defensible purpose. Recently, there was a brief attempt in some circles to claim Pratchett posthumously for the anti-transgender cause in the UK. Pratchett's daughter was having none of it, and any Pratchett reader should have been able to reject that out of hand, but Monstrous Regiment is a comprehensive refutation written by Pratchett himself some twenty years earlier. Polly is herself is not transgender. She thinks of herself as a woman throughout the book; she's just pretending to be a boy. But she also rejects binary gender roles with the scathing dismissal of someone who knows first-hand how superficial they are, and there is at least one transgender character in this novel (although to say who would be a spoiler). By the end of the book, you will have no doubt that Pratchett's opinion about people imposing gender roles on others is the same as his opinion about every other attempt to treat people as things. That said, by 2023 standards the treatment of gender here seems... naive? I think 2003 may sadly have been a more innocent time. We're now deep into a vicious backlash against any attempt to question binary gender assignment, but very little of that nastiness and malice is present here. In one way, this is a feature; there's more than enough of that in real life. However, it also makes the undermining of gender roles feel a bit too easy. There are good in-story reasons for why it's relatively simple for Polly to pass as a boy, but I still spent a lot of the book thinking that passing as a private in the army would be a lot harder and riskier than this. Pratchett can't resist a lot of cross-dressing and gender befuddlement jokes, all of which are kindly and wry but (at least for me) hit a bit differently in 2023 than they would have in 2003. The climax of the story is also a reference to a classic UK novel that to even name would be to spoil one or both of the books, but which I thought pulled the punch of the story and dissipated a lot of the built-up emotional energy. 
My larger complaints, though, are more idiosyncratic. This is a war novel about the enlisted ranks, including the hazing rituals involved in joining the military. There are things I love about military fiction, but apparently that reaction requires I have some sympathy for the fight or the goals of the institution. Monstrous Regiment falls into the class of war stories where the war is pointless and the system is abusive but the camaraderie in the ranks makes service oddly worthwhile, if not entirely justifiable. This is a real feeling that many veterans do have about military service, and I don't mean to question it, but apparently reading about it makes me grumbly. There's only so much of the apparently gruff sergeant with a heart of gold that I can take before I start wondering why we glorify hazing rituals as a type of tough love, or why the person with some authority doesn't put a direct stop to nastiness instead of providing moral support so subtle you could easily blink and miss it. Let alone the more basic problems like none of these people should have to be here doing this, or lots of people are being mangled and killed to make possible this heart-warming friendship. Like I said earlier, this is a me problem, not a Pratchett problem. He's writing a perfectly reasonable story in a genre I just happen to dislike. He's even undermining the genre in the process, just not quite fast enough or thoroughly enough for my taste. A related grumble is that Monstrous Regiment is very invested in the military trope of naive and somewhat incompetent officers who have to be led by the nose by experienced sergeants into making the right decision. I have never been in the military, but I work in an industry in which it is common to treat management as useless incompetents at best and actively malicious forces at worst. This is, to me, one of the most persistently obnoxious attitudes in my profession, and apparently my dislike of it carries over as a low tolerance for this very common attitude towards military hierarchy. A full expansion of this point would mostly be about the purpose of management, division of labor, and people's persistent dismissal of skills they don't personally have and may perceive as gendered, and while some of that is tangentially related to this book, it's not closely-related enough for me to bore you with it in a review. Maybe I'll write a stand-alone blog post someday. Suffice it to say that Pratchett deployed a common trope that most people would laugh at and read past without a second thought, but that for my own reasons started getting under my skin by the end of the novel. All of that grumbling aside, I did like this book. It is a very solid Discworld novel that does all the typical things a Discworld novel does: likable protagonists you can root for, odd and fascinating side characters, sharp and witty observations of human nature, and a mostly enjoyable ending where most of the right things happen. Polly is great; I was very happy to read a book from her perspective and would happily read more. Vimes makes a few appearances being Vimes, and while I found his approach in this book less satisfying than in Jingo, I'll still take it. And the examination of gender roles, even if a bit less fraught than current politics, is solid Pratchett morality. The best part of this book for me, by far, is Wazzer. 
I think that subplot was the most Discworld part of this book: a deeply devout belief in a pseudo-godlike figure that is part of the abusive social structure that creates many of the problems of the book becomes something considerably stranger and more wonderful. There is a type of belief that is so powerful that it transforms the target of that belief, at least in worlds like Discworld that have a lot of ambient magic. Not many people have that type of belief, and having it is not a comfortable experience, but it makes for a truly excellent story. Monstrous Regiment is a solid Discworld novel. It was not one of my favorites, but it probably will be someone else's favorite for a host of good reasons. Good stuff; if you've read this far, you will enjoy it. Followed by A Hat Full of Sky in publication order, and thematically (but very loosely) by Going Postal. Rating: 8 out of 10

30 September 2023

Ian Jackson: DKIM: rotate and publish your keys

If you are an email system administrator, you are probably using DKIM to sign your outgoing emails. You should be rotating the key regularly and automatically, and publishing old private keys. I have just released dkim-rotate 1.0; dkim-rotate is a tool to do this key rotation and publication. If you are an email user, your email provider ought to be doing this. If this is not done, your emails are non-repudiable , meaning that if they are leaked, anyone (eg, journalists, haters) can verify that they are authentic, and prove that to others. This is not desirable (for you). Non-repudiation of emails is undesirable This problem was described at some length in Matthew Green s article Ok Google: please publish your DKIM secret keys. Avoiding non-repudiation sounds a bit like lying. After all, I m advising creating a situation where some people can t verify that something is true, even though it is. So I m advocating casting doubt. Crucially, though, it s doubt about facts that ought to be private. When you send an email, that s between you and the recipient. Normally you don t intend for anyone, anywhere, who happens to get a copy, to be able to verify that it was really you that sent it. In practical terms, this verifiability has already been used by journalists to verify stolen emails. Associated Press provide a verification tool. Advice for all email users As a user, you probably don t want your emails to be non-repudiable. (Other people might want to be able to prove you sent some email, but your email system ought to serve your interests, not theirs.) So, your email provider ought to be rotating their DKIM keys, and publishing their old ones. At a rough guess, your provider probably isn t :-(. How to tell by looking at email headers A quick and dirty way to guess is to have a friend look at the email headers of a message you sent. (It is important that the friend uses a different email provider, since often DKIM signatures are not applied within a single email system.) If your friend sees a DKIM-Signature header then the message is DKIM signed. If they don t, then it wasn t. Most email traversing the public internet is DKIM signed nowadays; so if they don t see the header probably they re not looking using the right tools, or they re actually on the same email system as you. In messages signed by a system running dkim-rotate, there will also be a header about the key rotation, to notify potential verifiers of the situation. Other systems that avoid non-repudiation-through-DKIM might do something similar. dkim-rotate s header looks like this:
DKIM-Signature-Warning: NOTE REGARDING DKIM KEY COMPROMISE
 https://www.chiark.greenend.org.uk/dkim-rotate/README.txt
 https://www.chiark.greenend.org.uk/dkim-rotate/ae/aeb689c2066c5b3fee673355309fe1c7.pem
But an email system might do half of the job of dkim-rotate: regularly rotating the key would cause the signatures of old emails to fail to verify, which is a good start. In that case there probably won t be such a header. Testing verification of new and old messages You can also try verifying the signatures. This isn t entirely straightforward, especially if you don t have access to low-level mail tooling. Your friend will need to be able to save emails as raw whole headers and body, un-decoded, un-rendered. If your friend is using a traditional Unix mail program, they should save the message as an mbox file. Otherwise, ProPublica have instructions for attaching and transferring and obtaining the raw email. (Scroll down to How to Check DKIM and ARC .) Checking that recent emails are verifiable Firstly, have your friend test that they can in fact verify a DKIM signature. This will demonstrate that the next test, where the verification is supposed to fail, is working properly and fails for the right reasons. Send your friend a test email now, and have them do this on a Linux system:
    # save the message as test-email.mbox
    apt install libmail-dkim-perl # or equivalent on another distro
    dkimproxy-verify <test-email.mbox
You should see output containing something like this:
    originator address: ijackson@chiark.greenend.org.uk
    signature identity: @chiark.greenend.org.uk
    verify result: pass
    ...
If the output contains verify result: fail (body has been altered) then probably your friend didn't manage to faithfully save the unaltered raw message. Checking old emails cannot be verified When you both have that working, have your friend find an older email of yours, from (say) a month ago. Perform the same steps. Hopefully they will see something like this:
    originator address: ijackson@chiark.greenend.org.uk
    signature identity: @chiark.greenend.org.uk
    verify result: fail (bad RSA signature)
or maybe
    verify result: invalid (public key: not available)
This indicates that this old email can no longer be verified. That's good: it means that anyone who steals a copy can't verify it either. If it's leaked, the journalist who receives it won't know it's genuine and unmodified; they should then be suspicious. If your friend sees verify result: pass, then they have verified that that old email of yours is genuine. Anyone who had a copy of the mail can do that. This is good for email thieves, but not for you. For email admins: announcing dkim-rotate 1.0 I have been running dkim-rotate 0.4 on my infrastructure since last August, and I had entirely forgotten about it: it has run flawlessly for a year. I was reminded of the topic by seeing DKIM in other blog posts. Obviously, it is time to decree that dkim-rotate is 1.0. If you're a mail system administrator, your users are best served if you use something like dkim-rotate. The package is available in Debian stable, and supports Exim out of the box, but other MTAs should be easy to support too, via some simple ad-hoc scripting. Limitation of this approach Even with this key rotation approach, emails remain non-repudiable for a short period after they're sent - typically, a few days. Someone who obtains a leaked email very promptly, and shows it to the journalist (for example) right away, can still convince the journalist. This is not great, but at least it doesn't apply to the vast bulk of your email archive. There are possible email protocol improvements which might help, but they're quite out of scope for this article.
Edited 2023-10-01 00:20 +01:00 to fix some grammar

